00:00:00.000 Started by upstream project "autotest-per-patch" build number 132503
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.031 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.032 The recommended git tool is: git
00:00:00.033 using credential 00000000-0000-0000-0000-000000000002
00:00:00.035 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.049 Fetching changes from the remote Git repository
00:00:00.053 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.067 Using shallow fetch with depth 1
00:00:00.067 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.067 > git --version # timeout=10
00:00:00.081 > git --version # 'git version 2.39.2'
00:00:00.081 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.096 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.096 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.808 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.821 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.833 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:04.833 > git config core.sparsecheckout # timeout=10
00:00:04.846 > git read-tree -mu HEAD # timeout=10
00:00:04.863 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:04.887 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:04.887 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:04.975 [Pipeline] Start of Pipeline
00:00:04.985 [Pipeline] library
00:00:04.986 Loading library shm_lib@master
00:00:04.986 Library shm_lib@master is cached. Copying from home.
00:00:04.996 [Pipeline] node
00:00:05.003 Running on VM-host-SM17 in /var/jenkins/workspace/raid-vg-autotest_2
00:00:05.005 [Pipeline] {
00:00:05.011 [Pipeline] catchError
00:00:05.012 [Pipeline] {
00:00:05.019 [Pipeline] wrap
00:00:05.024 [Pipeline] {
00:00:05.028 [Pipeline] stage
00:00:05.029 [Pipeline] { (Prologue)
00:00:05.042 [Pipeline] echo
00:00:05.043 Node: VM-host-SM17
00:00:05.046 [Pipeline] cleanWs
00:00:05.055 [WS-CLEANUP] Deleting project workspace...
00:00:05.055 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.061 [WS-CLEANUP] done
00:00:05.253 [Pipeline] setCustomBuildProperty
00:00:05.376 [Pipeline] httpRequest
00:00:05.674 [Pipeline] echo
00:00:05.676 Sorcerer 10.211.164.20 is alive
00:00:05.687 [Pipeline] retry
00:00:05.688 [Pipeline] {
00:00:05.701 [Pipeline] httpRequest
00:00:05.704 HttpMethod: GET
00:00:05.704 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.705 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.712 Response Code: HTTP/1.1 200 OK
00:00:05.712 Success: Status code 200 is in the accepted range: 200,404
00:00:05.713 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:19.796 [Pipeline] }
00:00:19.816 [Pipeline] // retry
00:00:19.821 [Pipeline] sh
00:00:20.100 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:20.117 [Pipeline] httpRequest
00:00:20.522 [Pipeline] echo
00:00:20.524 Sorcerer 10.211.164.20 is alive
00:00:20.534 [Pipeline] retry
00:00:20.537 [Pipeline] {
00:00:20.551 [Pipeline] httpRequest
00:00:20.557 HttpMethod: GET
00:00:20.557 URL: http://10.211.164.20/packages/spdk_f1dd81af35d0bc4d9f5dce5ca525d9ba3cf32cd3.tar.gz
00:00:20.558 Sending request to url: http://10.211.164.20/packages/spdk_f1dd81af35d0bc4d9f5dce5ca525d9ba3cf32cd3.tar.gz
00:00:20.569 Response Code: HTTP/1.1 200 OK
00:00:20.570 Success: Status code 200 is in the accepted range: 200,404
00:00:20.570 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/spdk_f1dd81af35d0bc4d9f5dce5ca525d9ba3cf32cd3.tar.gz
00:04:49.909 [Pipeline] }
00:04:49.927 [Pipeline] // retry
00:04:49.935 [Pipeline] sh
00:04:50.217 + tar --no-same-owner -xf spdk_f1dd81af35d0bc4d9f5dce5ca525d9ba3cf32cd3.tar.gz
00:04:53.516 [Pipeline] sh
00:04:53.795 + git -C spdk log --oneline -n5
00:04:53.795 f1dd81af3 nvme: add spdk_nvme_poll_group_get_fd_group()
00:04:53.795 4da34a829 thread: fd_group-based interrupts
00:04:53.795 10ec63d4e thread: move interrupt allocation to a function
00:04:53.796 393e80fcd util: add method for setting fd_group's wrapper
00:04:53.796 1e9cebf19 util: multi-level fd_group nesting
00:04:53.816 [Pipeline] writeFile
00:04:53.834 [Pipeline] sh
00:04:54.118 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:04:54.130 [Pipeline] sh
00:04:54.509 + cat autorun-spdk.conf
00:04:54.509 SPDK_RUN_FUNCTIONAL_TEST=1
00:04:54.509 SPDK_RUN_ASAN=1
00:04:54.509 SPDK_RUN_UBSAN=1
00:04:54.509 SPDK_TEST_RAID=1
00:04:54.509 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:54.516 RUN_NIGHTLY=0
00:04:54.517 [Pipeline] }
00:04:54.529 [Pipeline] // stage
00:04:54.544 [Pipeline] stage
00:04:54.546 [Pipeline] { (Run VM)
00:04:54.557 [Pipeline] sh
00:04:54.838 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:04:54.838 + echo 'Start stage prepare_nvme.sh'
00:04:54.838 Start stage prepare_nvme.sh
00:04:54.838 + [[ -n 6 ]]
00:04:54.838 + disk_prefix=ex6
00:04:54.838 + [[ -n /var/jenkins/workspace/raid-vg-autotest_2 ]]
00:04:54.838 + [[ -e /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf ]]
00:04:54.838 + source /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf
00:04:54.838 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:54.838 ++ SPDK_RUN_ASAN=1
00:04:54.838 ++ SPDK_RUN_UBSAN=1
00:04:54.838 ++ SPDK_TEST_RAID=1
00:04:54.838 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:54.838 ++ RUN_NIGHTLY=0
00:04:54.838 + cd /var/jenkins/workspace/raid-vg-autotest_2
00:04:54.838 + nvme_files=()
00:04:54.838 + declare -A nvme_files
00:04:54.838 + backend_dir=/var/lib/libvirt/images/backends
00:04:54.838 + nvme_files['nvme.img']=5G
00:04:54.838 + nvme_files['nvme-cmb.img']=5G
00:04:54.838 + nvme_files['nvme-multi0.img']=4G
00:04:54.838 + nvme_files['nvme-multi1.img']=4G
00:04:54.838 + nvme_files['nvme-multi2.img']=4G
00:04:54.838 + nvme_files['nvme-openstack.img']=8G
00:04:54.838 + nvme_files['nvme-zns.img']=5G
00:04:54.838 + (( SPDK_TEST_NVME_PMR == 1 ))
00:04:54.838 + (( SPDK_TEST_FTL == 1 ))
00:04:54.838 + (( SPDK_TEST_NVME_FDP == 1 ))
00:04:54.838 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:04:54.838 + for nvme in "${!nvme_files[@]}"
00:04:54.838 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G
00:04:54.838 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:04:54.838 + for nvme in "${!nvme_files[@]}"
00:04:54.838 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G
00:04:54.838 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:04:54.838 + for nvme in "${!nvme_files[@]}"
00:04:54.838 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G
00:04:54.838 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:04:54.838 + for nvme in "${!nvme_files[@]}"
00:04:54.838 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G
00:04:54.838 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:04:54.838 + for nvme in "${!nvme_files[@]}"
00:04:54.838 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G
00:04:54.838 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:04:54.838 + for nvme in "${!nvme_files[@]}"
00:04:54.838 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G
00:04:54.838 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:04:54.838 + for nvme in "${!nvme_files[@]}"
00:04:54.838 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G
00:04:54.838 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:04:54.838 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu
00:04:54.838 + echo 'End stage prepare_nvme.sh'
00:04:54.838 End stage prepare_nvme.sh
00:04:54.849 [Pipeline] sh
00:04:55.129 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:04:55.129 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -H -a -v -f fedora39
00:04:55.129
00:04:55.129 DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant
00:04:55.129 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk
00:04:55.129 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest_2
00:04:55.129 HELP=0
00:04:55.129 DRY_RUN=0
00:04:55.129 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,
00:04:55.129 NVME_DISKS_TYPE=nvme,nvme,
00:04:55.129 NVME_AUTO_CREATE=0
00:04:55.129 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,
00:04:55.129 NVME_CMB=,,
00:04:55.129 NVME_PMR=,,
00:04:55.129 NVME_ZNS=,,
00:04:55.129 NVME_MS=,,
00:04:55.129 NVME_FDP=,,
00:04:55.130 SPDK_VAGRANT_DISTRO=fedora39
00:04:55.130 SPDK_VAGRANT_VMCPU=10
00:04:55.130 SPDK_VAGRANT_VMRAM=12288
00:04:55.130 SPDK_VAGRANT_PROVIDER=libvirt
00:04:55.130 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:04:55.130 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:04:55.130 SPDK_OPENSTACK_NETWORK=0
00:04:55.130 VAGRANT_PACKAGE_BOX=0
00:04:55.130 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile
00:04:55.130 FORCE_DISTRO=true
00:04:55.130 VAGRANT_BOX_VERSION=
00:04:55.130 EXTRA_VAGRANTFILES=
00:04:55.130 NIC_MODEL=e1000
00:04:55.130
00:04:55.130 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt'
00:04:55.130 /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest_2
00:04:58.419 Bringing machine 'default' up with 'libvirt' provider...
00:04:59.358 ==> default: Creating image (snapshot of base box volume).
00:04:59.358 ==> default: Creating domain with the following settings...
00:04:59.358 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732536234_e633ab0eae10bcb32134
00:04:59.358 ==> default: -- Domain type: kvm
00:04:59.358 ==> default: -- Cpus: 10
00:04:59.358 ==> default: -- Feature: acpi
00:04:59.358 ==> default: -- Feature: apic
00:04:59.358 ==> default: -- Feature: pae
00:04:59.358 ==> default: -- Memory: 12288M
00:04:59.358 ==> default: -- Memory Backing: hugepages:
00:04:59.358 ==> default: -- Management MAC:
00:04:59.358 ==> default: -- Loader:
00:04:59.358 ==> default: -- Nvram:
00:04:59.358 ==> default: -- Base box: spdk/fedora39
00:04:59.358 ==> default: -- Storage pool: default
00:04:59.358 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732536234_e633ab0eae10bcb32134.img (20G)
00:04:59.358 ==> default: -- Volume Cache: default
00:04:59.358 ==> default: -- Kernel:
00:04:59.358 ==> default: -- Initrd:
00:04:59.358 ==> default: -- Graphics Type: vnc
00:04:59.358 ==> default: -- Graphics Port: -1
00:04:59.358 ==> default: -- Graphics IP: 127.0.0.1
00:04:59.358 ==> default: -- Graphics Password: Not defined
00:04:59.358 ==> default: -- Video Type: cirrus
00:04:59.358 ==> default: -- Video VRAM: 9216
00:04:59.358 ==> default: -- Sound Type:
00:04:59.358 ==> default: -- Keymap: en-us
00:04:59.358 ==> default: -- TPM Path:
00:04:59.358 ==> default: -- INPUT: type=mouse, bus=ps2
00:04:59.358 ==> default: -- Command line args:
00:04:59.358 ==> default: -> value=-device,
00:04:59.358 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:04:59.358 ==> default: -> value=-drive,
00:04:59.358 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0,
00:04:59.359 ==> default: -> value=-device,
00:04:59.359 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:04:59.359 ==> default: -> value=-device,
00:04:59.359 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:04:59.359 ==> default: -> value=-drive,
00:04:59.359 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:04:59.359 ==> default: -> value=-device,
00:04:59.359 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:04:59.359 ==> default: -> value=-drive,
00:04:59.359 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:04:59.359 ==> default: -> value=-device,
00:04:59.359 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:04:59.359 ==> default: -> value=-drive,
00:04:59.359 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:04:59.359 ==> default: -> value=-device,
00:04:59.359 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:04:59.617 ==> default: Creating shared folders metadata...
00:04:59.617 ==> default: Starting domain.
00:05:02.150 ==> default: Waiting for domain to get an IP address...
00:05:20.230 ==> default: Waiting for SSH to become available...
00:05:21.165 ==> default: Configuring and enabling network interfaces...
00:05:24.449 default: SSH address: 192.168.121.250:22
00:05:24.449 default: SSH username: vagrant
00:05:24.449 default: SSH auth method: private key
00:05:26.978 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:05:35.088 ==> default: Mounting SSHFS shared folder...
00:05:36.465 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:05:36.465 ==> default: Checking Mount..
00:05:37.399 ==> default: Folder Successfully Mounted!
00:05:37.399 ==> default: Running provisioner: file...
00:05:38.334 default: ~/.gitconfig => .gitconfig
00:05:38.901
00:05:38.901 SUCCESS!
00:05:38.901
00:05:38.901 cd to /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use.
00:05:38.901 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:05:38.901 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm.
00:05:38.901
00:05:38.909 [Pipeline] }
00:05:38.925 [Pipeline] // stage
00:05:38.935 [Pipeline] dir
00:05:38.936 Running in /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt
00:05:38.938 [Pipeline] {
00:05:38.952 [Pipeline] catchError
00:05:38.954 [Pipeline] {
00:05:38.968 [Pipeline] sh
00:05:39.249 + vagrant ssh-config --host vagrant
00:05:39.249 + sed -ne /^Host/,$p
00:05:39.249 + tee ssh_conf
00:05:43.438 Host vagrant
00:05:43.438 HostName 192.168.121.250
00:05:43.438 User vagrant
00:05:43.438 Port 22
00:05:43.438 UserKnownHostsFile /dev/null
00:05:43.438 StrictHostKeyChecking no
00:05:43.439 PasswordAuthentication no
00:05:43.439 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:05:43.439 IdentitiesOnly yes
00:05:43.439 LogLevel FATAL
00:05:43.439 ForwardAgent yes
00:05:43.439 ForwardX11 yes
00:05:43.439
00:05:43.453 [Pipeline] withEnv
00:05:43.456 [Pipeline] {
00:05:43.473 [Pipeline] sh
00:05:43.759 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:05:43.759 source /etc/os-release
00:05:43.759 [[ -e /image.version ]] && img=$(< /image.version)
00:05:43.759 # Minimal, systemd-like check.
00:05:43.759 if [[ -e /.dockerenv ]]; then
00:05:43.759 # Clear garbage from the node's name:
00:05:43.759 # agt-er_autotest_547-896 -> autotest_547-896
00:05:43.759 # $HOSTNAME is the actual container id
00:05:43.759 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:05:43.759 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:05:43.759 # We can assume this is a mount from a host where container is running,
00:05:43.759 # so fetch its hostname to easily identify the target swarm worker.
00:05:43.759 container="$(< /etc/hostname) ($agent)"
00:05:43.759 else
00:05:43.759 # Fallback
00:05:43.759 container=$agent
00:05:43.759 fi
00:05:43.759 fi
00:05:43.759 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:05:43.759
00:05:44.028 [Pipeline] }
00:05:44.045 [Pipeline] // withEnv
00:05:44.053 [Pipeline] setCustomBuildProperty
00:05:44.067 [Pipeline] stage
00:05:44.070 [Pipeline] { (Tests)
00:05:44.086 [Pipeline] sh
00:05:44.365 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:05:44.636 [Pipeline] sh
00:05:44.916 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:05:45.187 [Pipeline] timeout
00:05:45.188 Timeout set to expire in 1 hr 30 min
00:05:45.189 [Pipeline] {
00:05:45.203 [Pipeline] sh
00:05:45.482 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:05:46.052 HEAD is now at f1dd81af3 nvme: add spdk_nvme_poll_group_get_fd_group()
00:05:46.064 [Pipeline] sh
00:05:46.345 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:05:46.618 [Pipeline] sh
00:05:46.898 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:05:46.914 [Pipeline] sh
00:05:47.193 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:05:47.452 ++ readlink -f spdk_repo
00:05:47.452 + DIR_ROOT=/home/vagrant/spdk_repo
00:05:47.452 + [[ -n /home/vagrant/spdk_repo ]]
00:05:47.452 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:05:47.452 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:05:47.452 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:05:47.452 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:05:47.452 + [[ -d /home/vagrant/spdk_repo/output ]]
00:05:47.452 + [[ raid-vg-autotest == pkgdep-* ]]
00:05:47.452 + cd /home/vagrant/spdk_repo
00:05:47.452 + source /etc/os-release
00:05:47.452 ++ NAME='Fedora Linux'
00:05:47.452 ++ VERSION='39 (Cloud Edition)'
00:05:47.452 ++ ID=fedora
00:05:47.452 ++ VERSION_ID=39
00:05:47.452 ++ VERSION_CODENAME=
00:05:47.452 ++ PLATFORM_ID=platform:f39
00:05:47.452 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:05:47.452 ++ ANSI_COLOR='0;38;2;60;110;180'
00:05:47.452 ++ LOGO=fedora-logo-icon
00:05:47.452 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:05:47.452 ++ HOME_URL=https://fedoraproject.org/
00:05:47.452 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:05:47.452 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:05:47.452 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:05:47.452 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:05:47.452 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:05:47.452 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:05:47.452 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:05:47.452 ++ SUPPORT_END=2024-11-12
00:05:47.452 ++ VARIANT='Cloud Edition'
00:05:47.452 ++ VARIANT_ID=cloud
00:05:47.452 + uname -a
00:05:47.452 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:05:47.452 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:05:47.711 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:47.992 Hugepages
00:05:47.992 node hugesize free / total
00:05:47.992 node0 1048576kB 0 / 0
00:05:47.992 node0 2048kB 0 / 0
00:05:47.992
00:05:47.992 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:47.992 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:05:47.992 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:05:47.992 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:05:47.992 + rm -f /tmp/spdk-ld-path
00:05:47.992 + source autorun-spdk.conf
00:05:47.992 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:05:47.992 ++ SPDK_RUN_ASAN=1
00:05:47.992 ++ SPDK_RUN_UBSAN=1
00:05:47.992 ++ SPDK_TEST_RAID=1
00:05:47.992 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:05:47.992 ++ RUN_NIGHTLY=0
00:05:47.992 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:05:47.992 + [[ -n '' ]]
00:05:47.992 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:05:47.992 + for M in /var/spdk/build-*-manifest.txt
00:05:47.992 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:05:47.992 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:05:47.992 + for M in /var/spdk/build-*-manifest.txt
00:05:47.992 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:05:47.992 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:05:47.992 + for M in /var/spdk/build-*-manifest.txt
00:05:47.992 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:05:47.992 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:05:47.992 ++ uname
00:05:47.992 + [[ Linux == \L\i\n\u\x ]]
00:05:47.992 + sudo dmesg -T
00:05:47.992 + sudo dmesg --clear
00:05:47.992 + dmesg_pid=5210
00:05:47.992 + [[ Fedora Linux == FreeBSD ]]
00:05:47.992 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:05:47.992 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:05:47.992 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:05:47.992 + [[ -x /usr/src/fio-static/fio ]]
00:05:47.992 + sudo dmesg -Tw
00:05:47.992 + export FIO_BIN=/usr/src/fio-static/fio
00:05:47.992 + FIO_BIN=/usr/src/fio-static/fio
00:05:47.992 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:05:47.992 + [[ ! -v VFIO_QEMU_BIN ]]
00:05:47.992 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:05:47.992 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:05:47.992 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:05:47.992 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:05:47.992 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:05:47.992 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:05:47.992 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:05:47.992 12:04:44 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:05:47.992 12:04:44 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:05:47.992 12:04:44 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:05:47.992 12:04:44 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:05:47.992 12:04:44 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:05:47.992 12:04:44 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:05:47.992 12:04:44 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:05:47.992 12:04:44 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:05:47.992 12:04:44 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:05:47.992 12:04:44 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:05:48.255 12:04:44 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:05:48.255 12:04:44 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:05:48.255 12:04:44 -- scripts/common.sh@15 -- $ shopt -s extglob
00:05:48.255 12:04:44 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:05:48.255 12:04:44 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:48.255 12:04:44 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:48.255 12:04:44 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:48.255 12:04:44 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:48.255 12:04:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:48.255 12:04:44 -- paths/export.sh@5 -- $ export PATH
00:05:48.255 12:04:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:48.255 12:04:44 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:05:48.255 12:04:44 -- common/autobuild_common.sh@493 -- $ date +%s
00:05:48.255 12:04:44 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732536284.XXXXXX
00:05:48.255 12:04:44 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732536284.qAr1BT
00:05:48.255 12:04:44 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:05:48.255 12:04:44 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:05:48.255 12:04:44 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:05:48.255 12:04:44 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:05:48.255 12:04:44 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:05:48.255 12:04:44 -- common/autobuild_common.sh@509 -- $ get_config_params
00:05:48.255 12:04:44 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:05:48.255 12:04:44 -- common/autotest_common.sh@10 -- $ set +x
00:05:48.255 12:04:44 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:05:48.255 12:04:44 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:05:48.255 12:04:44 -- pm/common@17 -- $ local monitor
00:05:48.255 12:04:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:48.255 12:04:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:48.255 12:04:44 -- pm/common@25 -- $ sleep 1
00:05:48.255 12:04:44 -- pm/common@21 -- $ date +%s
00:05:48.255 12:04:44 -- pm/common@21 -- $ date +%s
00:05:48.255 12:04:44 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732536284
00:05:48.255 12:04:44 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732536284
00:05:48.255 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732536284_collect-vmstat.pm.log
00:05:48.255 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732536284_collect-cpu-load.pm.log
00:05:49.192 12:04:45 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:05:49.192 12:04:45 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:05:49.192 12:04:45 -- spdk/autobuild.sh@12 -- $ umask 022
00:05:49.192 12:04:45 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:05:49.192 12:04:45 -- spdk/autobuild.sh@16 -- $ date -u
00:05:49.192 Mon Nov 25 12:04:45 PM UTC 2024
00:05:49.192 12:04:45 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:05:49.192 v25.01-pre-225-gf1dd81af3
00:05:49.192 12:04:45 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:05:49.192 12:04:45 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:05:49.192 12:04:45 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:05:49.192 12:04:45 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:05:49.192 12:04:45 -- common/autotest_common.sh@10 -- $ set +x
00:05:49.192 ************************************
00:05:49.192 START TEST asan
00:05:49.192 ************************************
00:05:49.192 using asan
00:05:49.192 12:04:45 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:05:49.192
00:05:49.192 real 0m0.000s
00:05:49.192 user 0m0.000s
00:05:49.192 sys 0m0.000s
00:05:49.192 12:04:45 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:05:49.192 12:04:45 asan -- common/autotest_common.sh@10 -- $ set +x
00:05:49.192 ************************************
00:05:49.192 END TEST asan
00:05:49.192 ************************************
00:05:49.192 12:04:45 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:05:49.192 12:04:45 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:05:49.192 12:04:45 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:05:49.192 12:04:45 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:05:49.192 12:04:45 -- common/autotest_common.sh@10 -- $ set +x
00:05:49.192 ************************************
00:05:49.192 START TEST ubsan
00:05:49.192 ************************************
00:05:49.192 using ubsan
00:05:49.192 12:04:45 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:05:49.192
00:05:49.192 real 0m0.000s
00:05:49.192 user 0m0.000s
00:05:49.192 sys 0m0.000s
00:05:49.192 ************************************
00:05:49.192 12:04:45 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:05:49.192 12:04:45 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:05:49.192 END TEST ubsan
00:05:49.192 ************************************
00:05:49.451 12:04:45 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:05:49.451 12:04:45 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:05:49.451 12:04:45 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:05:49.451 12:04:45 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:05:49.451 12:04:45 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:05:49.451 12:04:45 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:05:49.451 12:04:45 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:05:49.451 12:04:45 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:05:49.451 12:04:45 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:05:49.451 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:05:49.451 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:05:50.021 Using 'verbs' RDMA provider
00:06:03.160 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:06:18.033 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:06:18.033 Creating mk/config.mk...done.
00:06:18.033 Creating mk/cc.flags.mk...done.
00:06:18.033 Type 'make' to build.
00:06:18.033 12:05:12 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:06:18.033 12:05:12 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:06:18.033 12:05:12 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:06:18.033 12:05:12 -- common/autotest_common.sh@10 -- $ set +x
00:06:18.033 ************************************
00:06:18.033 START TEST make
00:06:18.033 ************************************
00:06:18.033 12:05:12 make -- common/autotest_common.sh@1129 -- $ make -j10
00:06:18.033 make[1]: Nothing to be done for 'all'.
00:06:36.209 The Meson build system
00:06:36.209 Version: 1.5.0
00:06:36.209 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:06:36.209 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:06:36.209 Build type: native build
00:06:36.209 Program cat found: YES (/usr/bin/cat)
00:06:36.209 Project name: DPDK
00:06:36.209 Project version: 24.03.0
00:06:36.209 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:06:36.209 C linker for the host machine: cc ld.bfd 2.40-14
00:06:36.209 Host machine cpu family: x86_64
00:06:36.209 Host machine cpu: x86_64
00:06:36.209 Message: ## Building in Developer Mode ##
00:06:36.209 Program pkg-config found: YES (/usr/bin/pkg-config)
00:06:36.209 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:06:36.209 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:06:36.209 Program python3 found: YES (/usr/bin/python3)
00:06:36.209 Program cat found: YES (/usr/bin/cat)
00:06:36.209 Compiler for C supports arguments -march=native: YES
00:06:36.209 Checking for size of "void *" : 8
00:06:36.209 Checking for size of "void *" : 8 (cached)
00:06:36.209 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:06:36.209 Library m found: YES
00:06:36.209 Library numa found: YES
00:06:36.209 Has header "numaif.h" : YES
00:06:36.209 Library fdt found: NO
00:06:36.209 Library execinfo found: NO
00:06:36.209 Has header "execinfo.h" : YES
00:06:36.209 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:06:36.209 Run-time dependency libarchive found: NO (tried pkgconfig)
00:06:36.209 Run-time dependency libbsd found: NO (tried pkgconfig)
00:06:36.209 Run-time dependency jansson found: NO (tried pkgconfig)
00:06:36.209 Run-time dependency openssl found: YES 3.1.1
00:06:36.209 Run-time dependency libpcap found: YES 1.10.4
00:06:36.209 Has header "pcap.h" with dependency
libpcap: YES 00:06:36.209 Compiler for C supports arguments -Wcast-qual: YES 00:06:36.209 Compiler for C supports arguments -Wdeprecated: YES 00:06:36.209 Compiler for C supports arguments -Wformat: YES 00:06:36.209 Compiler for C supports arguments -Wformat-nonliteral: NO 00:06:36.209 Compiler for C supports arguments -Wformat-security: NO 00:06:36.209 Compiler for C supports arguments -Wmissing-declarations: YES 00:06:36.209 Compiler for C supports arguments -Wmissing-prototypes: YES 00:06:36.209 Compiler for C supports arguments -Wnested-externs: YES 00:06:36.209 Compiler for C supports arguments -Wold-style-definition: YES 00:06:36.209 Compiler for C supports arguments -Wpointer-arith: YES 00:06:36.209 Compiler for C supports arguments -Wsign-compare: YES 00:06:36.209 Compiler for C supports arguments -Wstrict-prototypes: YES 00:06:36.209 Compiler for C supports arguments -Wundef: YES 00:06:36.209 Compiler for C supports arguments -Wwrite-strings: YES 00:06:36.209 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:06:36.209 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:06:36.209 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:06:36.209 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:06:36.209 Program objdump found: YES (/usr/bin/objdump) 00:06:36.209 Compiler for C supports arguments -mavx512f: YES 00:06:36.209 Checking if "AVX512 checking" compiles: YES 00:06:36.209 Fetching value of define "__SSE4_2__" : 1 00:06:36.209 Fetching value of define "__AES__" : 1 00:06:36.209 Fetching value of define "__AVX__" : 1 00:06:36.209 Fetching value of define "__AVX2__" : 1 00:06:36.209 Fetching value of define "__AVX512BW__" : (undefined) 00:06:36.209 Fetching value of define "__AVX512CD__" : (undefined) 00:06:36.209 Fetching value of define "__AVX512DQ__" : (undefined) 00:06:36.209 Fetching value of define "__AVX512F__" : (undefined) 00:06:36.209 Fetching value of define "__AVX512VL__" : 
(undefined) 00:06:36.209 Fetching value of define "__PCLMUL__" : 1 00:06:36.209 Fetching value of define "__RDRND__" : 1 00:06:36.209 Fetching value of define "__RDSEED__" : 1 00:06:36.209 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:06:36.209 Fetching value of define "__znver1__" : (undefined) 00:06:36.209 Fetching value of define "__znver2__" : (undefined) 00:06:36.209 Fetching value of define "__znver3__" : (undefined) 00:06:36.209 Fetching value of define "__znver4__" : (undefined) 00:06:36.209 Library asan found: YES 00:06:36.209 Compiler for C supports arguments -Wno-format-truncation: YES 00:06:36.209 Message: lib/log: Defining dependency "log" 00:06:36.209 Message: lib/kvargs: Defining dependency "kvargs" 00:06:36.209 Message: lib/telemetry: Defining dependency "telemetry" 00:06:36.209 Library rt found: YES 00:06:36.209 Checking for function "getentropy" : NO 00:06:36.209 Message: lib/eal: Defining dependency "eal" 00:06:36.209 Message: lib/ring: Defining dependency "ring" 00:06:36.209 Message: lib/rcu: Defining dependency "rcu" 00:06:36.209 Message: lib/mempool: Defining dependency "mempool" 00:06:36.209 Message: lib/mbuf: Defining dependency "mbuf" 00:06:36.209 Fetching value of define "__PCLMUL__" : 1 (cached) 00:06:36.209 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:06:36.209 Compiler for C supports arguments -mpclmul: YES 00:06:36.209 Compiler for C supports arguments -maes: YES 00:06:36.209 Compiler for C supports arguments -mavx512f: YES (cached) 00:06:36.209 Compiler for C supports arguments -mavx512bw: YES 00:06:36.209 Compiler for C supports arguments -mavx512dq: YES 00:06:36.209 Compiler for C supports arguments -mavx512vl: YES 00:06:36.209 Compiler for C supports arguments -mvpclmulqdq: YES 00:06:36.209 Compiler for C supports arguments -mavx2: YES 00:06:36.209 Compiler for C supports arguments -mavx: YES 00:06:36.209 Message: lib/net: Defining dependency "net" 00:06:36.209 Message: lib/meter: Defining 
dependency "meter" 00:06:36.209 Message: lib/ethdev: Defining dependency "ethdev" 00:06:36.209 Message: lib/pci: Defining dependency "pci" 00:06:36.209 Message: lib/cmdline: Defining dependency "cmdline" 00:06:36.209 Message: lib/hash: Defining dependency "hash" 00:06:36.209 Message: lib/timer: Defining dependency "timer" 00:06:36.209 Message: lib/compressdev: Defining dependency "compressdev" 00:06:36.209 Message: lib/cryptodev: Defining dependency "cryptodev" 00:06:36.209 Message: lib/dmadev: Defining dependency "dmadev" 00:06:36.209 Compiler for C supports arguments -Wno-cast-qual: YES 00:06:36.209 Message: lib/power: Defining dependency "power" 00:06:36.209 Message: lib/reorder: Defining dependency "reorder" 00:06:36.209 Message: lib/security: Defining dependency "security" 00:06:36.209 Has header "linux/userfaultfd.h" : YES 00:06:36.209 Has header "linux/vduse.h" : YES 00:06:36.209 Message: lib/vhost: Defining dependency "vhost" 00:06:36.209 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:06:36.209 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:06:36.209 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:06:36.209 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:06:36.209 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:06:36.209 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:06:36.209 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:06:36.209 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:06:36.209 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:06:36.209 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:06:36.209 Program doxygen found: YES (/usr/local/bin/doxygen) 00:06:36.209 Configuring doxy-api-html.conf using configuration 00:06:36.209 Configuring doxy-api-man.conf using configuration 00:06:36.209 Program mandb found: YES 
(/usr/bin/mandb) 00:06:36.209 Program sphinx-build found: NO 00:06:36.209 Configuring rte_build_config.h using configuration 00:06:36.209 Message: 00:06:36.209 ================= 00:06:36.209 Applications Enabled 00:06:36.209 ================= 00:06:36.209 00:06:36.209 apps: 00:06:36.209 00:06:36.209 00:06:36.209 Message: 00:06:36.209 ================= 00:06:36.209 Libraries Enabled 00:06:36.209 ================= 00:06:36.209 00:06:36.209 libs: 00:06:36.209 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:06:36.209 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:06:36.209 cryptodev, dmadev, power, reorder, security, vhost, 00:06:36.209 00:06:36.209 Message: 00:06:36.209 =============== 00:06:36.209 Drivers Enabled 00:06:36.209 =============== 00:06:36.209 00:06:36.209 common: 00:06:36.209 00:06:36.209 bus: 00:06:36.209 pci, vdev, 00:06:36.209 mempool: 00:06:36.209 ring, 00:06:36.209 dma: 00:06:36.209 00:06:36.209 net: 00:06:36.209 00:06:36.209 crypto: 00:06:36.209 00:06:36.209 compress: 00:06:36.209 00:06:36.209 vdpa: 00:06:36.209 00:06:36.209 00:06:36.209 Message: 00:06:36.209 ================= 00:06:36.209 Content Skipped 00:06:36.209 ================= 00:06:36.209 00:06:36.209 apps: 00:06:36.209 dumpcap: explicitly disabled via build config 00:06:36.209 graph: explicitly disabled via build config 00:06:36.209 pdump: explicitly disabled via build config 00:06:36.209 proc-info: explicitly disabled via build config 00:06:36.209 test-acl: explicitly disabled via build config 00:06:36.209 test-bbdev: explicitly disabled via build config 00:06:36.209 test-cmdline: explicitly disabled via build config 00:06:36.209 test-compress-perf: explicitly disabled via build config 00:06:36.209 test-crypto-perf: explicitly disabled via build config 00:06:36.209 test-dma-perf: explicitly disabled via build config 00:06:36.209 test-eventdev: explicitly disabled via build config 00:06:36.209 test-fib: explicitly disabled via build config 00:06:36.210 
test-flow-perf: explicitly disabled via build config 00:06:36.210 test-gpudev: explicitly disabled via build config 00:06:36.210 test-mldev: explicitly disabled via build config 00:06:36.210 test-pipeline: explicitly disabled via build config 00:06:36.210 test-pmd: explicitly disabled via build config 00:06:36.210 test-regex: explicitly disabled via build config 00:06:36.210 test-sad: explicitly disabled via build config 00:06:36.210 test-security-perf: explicitly disabled via build config 00:06:36.210 00:06:36.210 libs: 00:06:36.210 argparse: explicitly disabled via build config 00:06:36.210 metrics: explicitly disabled via build config 00:06:36.210 acl: explicitly disabled via build config 00:06:36.210 bbdev: explicitly disabled via build config 00:06:36.210 bitratestats: explicitly disabled via build config 00:06:36.210 bpf: explicitly disabled via build config 00:06:36.210 cfgfile: explicitly disabled via build config 00:06:36.210 distributor: explicitly disabled via build config 00:06:36.210 efd: explicitly disabled via build config 00:06:36.210 eventdev: explicitly disabled via build config 00:06:36.210 dispatcher: explicitly disabled via build config 00:06:36.210 gpudev: explicitly disabled via build config 00:06:36.210 gro: explicitly disabled via build config 00:06:36.210 gso: explicitly disabled via build config 00:06:36.210 ip_frag: explicitly disabled via build config 00:06:36.210 jobstats: explicitly disabled via build config 00:06:36.210 latencystats: explicitly disabled via build config 00:06:36.210 lpm: explicitly disabled via build config 00:06:36.210 member: explicitly disabled via build config 00:06:36.210 pcapng: explicitly disabled via build config 00:06:36.210 rawdev: explicitly disabled via build config 00:06:36.210 regexdev: explicitly disabled via build config 00:06:36.210 mldev: explicitly disabled via build config 00:06:36.210 rib: explicitly disabled via build config 00:06:36.210 sched: explicitly disabled via build config 00:06:36.210 
stack: explicitly disabled via build config 00:06:36.210 ipsec: explicitly disabled via build config 00:06:36.210 pdcp: explicitly disabled via build config 00:06:36.210 fib: explicitly disabled via build config 00:06:36.210 port: explicitly disabled via build config 00:06:36.210 pdump: explicitly disabled via build config 00:06:36.210 table: explicitly disabled via build config 00:06:36.210 pipeline: explicitly disabled via build config 00:06:36.210 graph: explicitly disabled via build config 00:06:36.210 node: explicitly disabled via build config 00:06:36.210 00:06:36.210 drivers: 00:06:36.210 common/cpt: not in enabled drivers build config 00:06:36.210 common/dpaax: not in enabled drivers build config 00:06:36.210 common/iavf: not in enabled drivers build config 00:06:36.210 common/idpf: not in enabled drivers build config 00:06:36.210 common/ionic: not in enabled drivers build config 00:06:36.210 common/mvep: not in enabled drivers build config 00:06:36.210 common/octeontx: not in enabled drivers build config 00:06:36.210 bus/auxiliary: not in enabled drivers build config 00:06:36.210 bus/cdx: not in enabled drivers build config 00:06:36.210 bus/dpaa: not in enabled drivers build config 00:06:36.210 bus/fslmc: not in enabled drivers build config 00:06:36.210 bus/ifpga: not in enabled drivers build config 00:06:36.210 bus/platform: not in enabled drivers build config 00:06:36.210 bus/uacce: not in enabled drivers build config 00:06:36.210 bus/vmbus: not in enabled drivers build config 00:06:36.210 common/cnxk: not in enabled drivers build config 00:06:36.210 common/mlx5: not in enabled drivers build config 00:06:36.210 common/nfp: not in enabled drivers build config 00:06:36.210 common/nitrox: not in enabled drivers build config 00:06:36.210 common/qat: not in enabled drivers build config 00:06:36.210 common/sfc_efx: not in enabled drivers build config 00:06:36.210 mempool/bucket: not in enabled drivers build config 00:06:36.210 mempool/cnxk: not in enabled 
drivers build config 00:06:36.210 mempool/dpaa: not in enabled drivers build config 00:06:36.210 mempool/dpaa2: not in enabled drivers build config 00:06:36.210 mempool/octeontx: not in enabled drivers build config 00:06:36.210 mempool/stack: not in enabled drivers build config 00:06:36.210 dma/cnxk: not in enabled drivers build config 00:06:36.210 dma/dpaa: not in enabled drivers build config 00:06:36.210 dma/dpaa2: not in enabled drivers build config 00:06:36.210 dma/hisilicon: not in enabled drivers build config 00:06:36.210 dma/idxd: not in enabled drivers build config 00:06:36.210 dma/ioat: not in enabled drivers build config 00:06:36.210 dma/skeleton: not in enabled drivers build config 00:06:36.210 net/af_packet: not in enabled drivers build config 00:06:36.210 net/af_xdp: not in enabled drivers build config 00:06:36.210 net/ark: not in enabled drivers build config 00:06:36.210 net/atlantic: not in enabled drivers build config 00:06:36.210 net/avp: not in enabled drivers build config 00:06:36.210 net/axgbe: not in enabled drivers build config 00:06:36.210 net/bnx2x: not in enabled drivers build config 00:06:36.210 net/bnxt: not in enabled drivers build config 00:06:36.210 net/bonding: not in enabled drivers build config 00:06:36.210 net/cnxk: not in enabled drivers build config 00:06:36.210 net/cpfl: not in enabled drivers build config 00:06:36.210 net/cxgbe: not in enabled drivers build config 00:06:36.210 net/dpaa: not in enabled drivers build config 00:06:36.210 net/dpaa2: not in enabled drivers build config 00:06:36.210 net/e1000: not in enabled drivers build config 00:06:36.210 net/ena: not in enabled drivers build config 00:06:36.210 net/enetc: not in enabled drivers build config 00:06:36.210 net/enetfec: not in enabled drivers build config 00:06:36.210 net/enic: not in enabled drivers build config 00:06:36.210 net/failsafe: not in enabled drivers build config 00:06:36.210 net/fm10k: not in enabled drivers build config 00:06:36.210 net/gve: not in 
enabled drivers build config 00:06:36.210 net/hinic: not in enabled drivers build config 00:06:36.210 net/hns3: not in enabled drivers build config 00:06:36.210 net/i40e: not in enabled drivers build config 00:06:36.210 net/iavf: not in enabled drivers build config 00:06:36.210 net/ice: not in enabled drivers build config 00:06:36.210 net/idpf: not in enabled drivers build config 00:06:36.210 net/igc: not in enabled drivers build config 00:06:36.210 net/ionic: not in enabled drivers build config 00:06:36.210 net/ipn3ke: not in enabled drivers build config 00:06:36.210 net/ixgbe: not in enabled drivers build config 00:06:36.210 net/mana: not in enabled drivers build config 00:06:36.210 net/memif: not in enabled drivers build config 00:06:36.210 net/mlx4: not in enabled drivers build config 00:06:36.210 net/mlx5: not in enabled drivers build config 00:06:36.210 net/mvneta: not in enabled drivers build config 00:06:36.210 net/mvpp2: not in enabled drivers build config 00:06:36.210 net/netvsc: not in enabled drivers build config 00:06:36.210 net/nfb: not in enabled drivers build config 00:06:36.210 net/nfp: not in enabled drivers build config 00:06:36.210 net/ngbe: not in enabled drivers build config 00:06:36.210 net/null: not in enabled drivers build config 00:06:36.210 net/octeontx: not in enabled drivers build config 00:06:36.210 net/octeon_ep: not in enabled drivers build config 00:06:36.210 net/pcap: not in enabled drivers build config 00:06:36.210 net/pfe: not in enabled drivers build config 00:06:36.210 net/qede: not in enabled drivers build config 00:06:36.210 net/ring: not in enabled drivers build config 00:06:36.210 net/sfc: not in enabled drivers build config 00:06:36.210 net/softnic: not in enabled drivers build config 00:06:36.210 net/tap: not in enabled drivers build config 00:06:36.210 net/thunderx: not in enabled drivers build config 00:06:36.210 net/txgbe: not in enabled drivers build config 00:06:36.210 net/vdev_netvsc: not in enabled drivers build 
config 00:06:36.210 net/vhost: not in enabled drivers build config 00:06:36.210 net/virtio: not in enabled drivers build config 00:06:36.210 net/vmxnet3: not in enabled drivers build config 00:06:36.210 raw/*: missing internal dependency, "rawdev" 00:06:36.210 crypto/armv8: not in enabled drivers build config 00:06:36.210 crypto/bcmfs: not in enabled drivers build config 00:06:36.210 crypto/caam_jr: not in enabled drivers build config 00:06:36.210 crypto/ccp: not in enabled drivers build config 00:06:36.210 crypto/cnxk: not in enabled drivers build config 00:06:36.210 crypto/dpaa_sec: not in enabled drivers build config 00:06:36.210 crypto/dpaa2_sec: not in enabled drivers build config 00:06:36.210 crypto/ipsec_mb: not in enabled drivers build config 00:06:36.210 crypto/mlx5: not in enabled drivers build config 00:06:36.210 crypto/mvsam: not in enabled drivers build config 00:06:36.210 crypto/nitrox: not in enabled drivers build config 00:06:36.210 crypto/null: not in enabled drivers build config 00:06:36.210 crypto/octeontx: not in enabled drivers build config 00:06:36.210 crypto/openssl: not in enabled drivers build config 00:06:36.210 crypto/scheduler: not in enabled drivers build config 00:06:36.210 crypto/uadk: not in enabled drivers build config 00:06:36.210 crypto/virtio: not in enabled drivers build config 00:06:36.210 compress/isal: not in enabled drivers build config 00:06:36.210 compress/mlx5: not in enabled drivers build config 00:06:36.210 compress/nitrox: not in enabled drivers build config 00:06:36.210 compress/octeontx: not in enabled drivers build config 00:06:36.210 compress/zlib: not in enabled drivers build config 00:06:36.210 regex/*: missing internal dependency, "regexdev" 00:06:36.210 ml/*: missing internal dependency, "mldev" 00:06:36.210 vdpa/ifc: not in enabled drivers build config 00:06:36.210 vdpa/mlx5: not in enabled drivers build config 00:06:36.210 vdpa/nfp: not in enabled drivers build config 00:06:36.210 vdpa/sfc: not in enabled 
drivers build config 00:06:36.210 event/*: missing internal dependency, "eventdev" 00:06:36.210 baseband/*: missing internal dependency, "bbdev" 00:06:36.210 gpu/*: missing internal dependency, "gpudev" 00:06:36.210 00:06:36.210 00:06:36.210 Build targets in project: 85 00:06:36.210 00:06:36.210 DPDK 24.03.0 00:06:36.210 00:06:36.210 User defined options 00:06:36.210 buildtype : debug 00:06:36.210 default_library : shared 00:06:36.210 libdir : lib 00:06:36.210 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:36.210 b_sanitize : address 00:06:36.210 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:06:36.210 c_link_args : 00:06:36.210 cpu_instruction_set: native 00:06:36.211 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:06:36.211 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:06:36.211 enable_docs : false 00:06:36.211 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:06:36.211 enable_kmods : false 00:06:36.211 max_lcores : 128 00:06:36.211 tests : false 00:06:36.211 00:06:36.211 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:06:36.211 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:06:36.211 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:06:36.211 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:06:36.211 [3/268] Linking static target lib/librte_kvargs.a 00:06:36.211 [4/268] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:06:36.211 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:06:36.211 [6/268] Linking static target lib/librte_log.a 00:06:36.211 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:06:36.211 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:06:36.211 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:06:36.211 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:06:36.211 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:06:36.211 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:06:36.211 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:06:36.211 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:06:36.211 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:06:36.211 [16/268] Linking static target lib/librte_telemetry.a 00:06:36.211 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:06:36.211 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:06:36.211 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:06:36.211 [20/268] Linking target lib/librte_log.so.24.1 00:06:36.469 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:06:36.469 [22/268] Linking target lib/librte_kvargs.so.24.1 00:06:36.469 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:06:36.469 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:06:36.469 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:06:36.469 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:06:36.727 
[27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:06:36.727 [28/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:06:36.727 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:06:36.985 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:06:36.985 [31/268] Linking target lib/librte_telemetry.so.24.1 00:06:36.985 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:06:36.985 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:06:37.243 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:06:37.243 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:06:37.243 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:06:37.243 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:06:37.502 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:06:37.502 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:06:37.502 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:06:37.761 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:06:37.761 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:06:37.761 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:06:38.020 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:06:38.278 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:06:38.278 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:06:38.278 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:06:38.537 [48/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:06:38.537 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:06:38.796 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:06:38.796 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:06:39.054 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:06:39.054 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:06:39.054 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:06:39.313 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:06:39.313 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:06:39.313 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:06:39.571 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:06:39.829 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:06:39.829 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:06:39.829 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:06:39.829 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:06:40.088 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:06:40.088 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:06:40.346 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:06:40.346 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:06:40.346 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:06:40.604 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:06:40.604 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:06:40.862 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:06:40.862 [71/268] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:06:40.862 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:06:40.862 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:06:40.862 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:06:41.121 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:06:41.121 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:06:41.121 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:06:41.121 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:06:41.121 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:06:41.121 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:06:41.379 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:06:41.379 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:06:41.946 [83/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:06:41.946 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:06:41.946 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:06:42.205 [86/268] Linking static target lib/librte_eal.a 00:06:42.205 [87/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:06:42.205 [88/268] Linking static target lib/librte_rcu.a 00:06:42.205 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:06:42.205 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:06:42.205 [91/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:06:42.205 [92/268] Linking static target lib/librte_ring.a 00:06:42.205 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:06:42.205 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 
00:06:42.205 [95/268] Linking static target lib/librte_mempool.a 00:06:42.772 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:06:42.772 [97/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:06:42.772 [98/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:06:42.772 [99/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:06:42.772 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:06:42.772 [101/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:06:43.030 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:06:43.030 [103/268] Linking static target lib/librte_mbuf.a 00:06:43.287 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:06:43.287 [105/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:06:43.287 [106/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:06:43.287 [107/268] Linking static target lib/librte_meter.a 00:06:43.287 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:06:43.544 [109/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:06:43.802 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:06:43.802 [111/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:06:43.802 [112/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:06:43.802 [113/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:06:43.802 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:06:43.802 [115/268] Linking static target lib/librte_net.a 00:06:44.059 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:06:44.317 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:06:44.317 [118/268] Generating 
lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:06:44.574 [119/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:06:44.574 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:06:44.574 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:06:44.832 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:06:45.399 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:06:45.399 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:06:45.399 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:06:45.399 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:06:45.399 [127/268] Linking static target lib/librte_pci.a 00:06:45.656 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:06:45.656 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:06:45.656 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:06:45.656 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:06:45.656 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:06:45.656 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:06:45.913 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:06:45.914 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:06:45.914 [136/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:06:45.914 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:06:45.914 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:06:45.914 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:06:45.914 [140/268] Compiling C 
object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:06:45.914 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:06:45.914 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:06:46.171 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:06:46.171 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:06:46.172 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:06:46.430 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:06:46.430 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:06:46.430 [148/268] Linking static target lib/librte_cmdline.a 00:06:46.688 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:06:46.946 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:06:46.946 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:06:46.947 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:06:47.205 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:06:47.463 [154/268] Linking static target lib/librte_timer.a 00:06:47.464 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:06:47.464 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:06:47.464 [157/268] Linking static target lib/librte_ethdev.a 00:06:47.722 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:06:47.722 [159/268] Linking static target lib/librte_hash.a 00:06:47.722 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:06:47.999 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:06:47.999 [162/268] Linking static target lib/librte_compressdev.a 00:06:47.999 [163/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:06:47.999 [164/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:06:47.999 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:06:48.270 [166/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:06:48.270 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:06:48.270 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:06:48.528 [169/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:06:48.528 [170/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:06:48.786 [171/268] Linking static target lib/librte_dmadev.a 00:06:48.786 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:06:48.786 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:06:49.045 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:49.045 [175/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:06:49.045 [176/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:06:49.045 [177/268] Linking static target lib/librte_cryptodev.a 00:06:49.045 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:06:49.303 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:06:49.561 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:06:49.561 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:06:49.561 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:06:49.561 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:06:49.561 [184/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to 
capture output) 00:06:49.820 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:06:50.078 [186/268] Linking static target lib/librte_power.a 00:06:50.078 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:06:50.078 [188/268] Linking static target lib/librte_reorder.a 00:06:50.336 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:06:50.336 [190/268] Linking static target lib/librte_security.a 00:06:50.336 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:06:50.336 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:06:50.594 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:06:50.594 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:06:50.853 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:06:51.111 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:06:51.370 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:06:51.370 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:06:51.628 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:06:51.628 [200/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:51.628 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:06:51.886 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:06:51.886 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:06:51.886 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:06:52.145 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:06:52.145 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 
00:06:52.403 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:06:52.403 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:06:52.661 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:06:52.661 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:06:52.661 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:06:52.661 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:06:52.661 [213/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:52.661 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:52.661 [215/268] Linking static target drivers/librte_bus_vdev.a 00:06:52.920 [216/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:06:52.920 [217/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:06:52.920 [218/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:06:52.920 [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:52.920 [220/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:52.920 [221/268] Linking static target drivers/librte_bus_pci.a 00:06:53.178 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:53.178 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:06:53.178 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:53.178 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:53.178 [226/268] Linking static target drivers/librte_mempool_ring.a 00:06:53.436 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command 
(wrapped by meson to capture output) 00:06:54.370 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:06:54.370 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:06:54.370 [230/268] Linking target lib/librte_eal.so.24.1 00:06:54.629 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:06:54.629 [232/268] Linking target lib/librte_ring.so.24.1 00:06:54.629 [233/268] Linking target lib/librte_meter.so.24.1 00:06:54.629 [234/268] Linking target lib/librte_pci.so.24.1 00:06:54.629 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:06:54.629 [236/268] Linking target lib/librte_dmadev.so.24.1 00:06:54.629 [237/268] Linking target lib/librte_timer.so.24.1 00:06:54.888 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:06:54.888 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:06:54.888 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:06:54.888 [241/268] Linking target lib/librte_rcu.so.24.1 00:06:54.888 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:06:54.888 [243/268] Linking target lib/librte_mempool.so.24.1 00:06:54.888 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:06:54.888 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:06:54.888 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:06:54.888 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:06:55.146 [248/268] Linking target lib/librte_mbuf.so.24.1 00:06:55.146 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:06:55.146 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:06:55.146 [251/268] Linking target lib/librte_reorder.so.24.1 00:06:55.146 
[252/268] Linking target lib/librte_cryptodev.so.24.1 00:06:55.146 [253/268] Linking target lib/librte_compressdev.so.24.1 00:06:55.146 [254/268] Linking target lib/librte_net.so.24.1 00:06:55.464 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:06:55.464 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:06:55.464 [257/268] Linking target lib/librte_hash.so.24.1 00:06:55.464 [258/268] Linking target lib/librte_cmdline.so.24.1 00:06:55.464 [259/268] Linking target lib/librte_security.so.24.1 00:06:55.722 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:06:55.722 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:55.981 [262/268] Linking target lib/librte_ethdev.so.24.1 00:06:55.981 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:06:56.239 [264/268] Linking target lib/librte_power.so.24.1 00:07:00.428 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:07:00.428 [266/268] Linking static target lib/librte_vhost.a 00:07:01.804 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:07:01.804 [268/268] Linking target lib/librte_vhost.so.24.1 00:07:01.804 INFO: autodetecting backend as ninja 00:07:01.804 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:07:23.802 CC lib/log/log_flags.o 00:07:23.802 CC lib/log/log.o 00:07:23.802 CC lib/ut_mock/mock.o 00:07:23.802 CC lib/log/log_deprecated.o 00:07:23.802 CC lib/ut/ut.o 00:07:23.802 LIB libspdk_ut_mock.a 00:07:23.802 SO libspdk_ut_mock.so.6.0 00:07:23.802 LIB libspdk_log.a 00:07:23.802 LIB libspdk_ut.a 00:07:23.802 SO libspdk_log.so.7.1 00:07:23.802 SYMLINK libspdk_ut_mock.so 00:07:23.802 SO libspdk_ut.so.2.0 00:07:23.802 SYMLINK libspdk_ut.so 00:07:23.802 SYMLINK 
libspdk_log.so 00:07:23.802 CC lib/dma/dma.o 00:07:23.802 CC lib/util/base64.o 00:07:23.802 CC lib/util/bit_array.o 00:07:23.802 CC lib/util/cpuset.o 00:07:23.802 CC lib/util/crc16.o 00:07:23.802 CC lib/ioat/ioat.o 00:07:23.802 CC lib/util/crc32c.o 00:07:23.802 CC lib/util/crc32.o 00:07:23.802 CXX lib/trace_parser/trace.o 00:07:23.802 CC lib/vfio_user/host/vfio_user_pci.o 00:07:23.802 CC lib/util/crc32_ieee.o 00:07:23.802 CC lib/util/crc64.o 00:07:23.802 CC lib/util/dif.o 00:07:23.802 CC lib/util/fd.o 00:07:23.802 LIB libspdk_dma.a 00:07:23.802 CC lib/util/fd_group.o 00:07:23.802 SO libspdk_dma.so.5.0 00:07:23.802 CC lib/util/file.o 00:07:23.802 CC lib/vfio_user/host/vfio_user.o 00:07:23.802 CC lib/util/hexlify.o 00:07:23.802 SYMLINK libspdk_dma.so 00:07:23.802 CC lib/util/iov.o 00:07:23.802 CC lib/util/math.o 00:07:23.802 CC lib/util/net.o 00:07:23.802 CC lib/util/pipe.o 00:07:23.802 LIB libspdk_ioat.a 00:07:23.802 CC lib/util/strerror_tls.o 00:07:23.802 SO libspdk_ioat.so.7.0 00:07:23.802 LIB libspdk_vfio_user.a 00:07:23.802 SO libspdk_vfio_user.so.5.0 00:07:23.802 SYMLINK libspdk_ioat.so 00:07:23.802 CC lib/util/string.o 00:07:23.802 CC lib/util/uuid.o 00:07:23.802 SYMLINK libspdk_vfio_user.so 00:07:23.802 CC lib/util/xor.o 00:07:23.802 CC lib/util/zipf.o 00:07:23.802 CC lib/util/md5.o 00:07:23.802 LIB libspdk_util.a 00:07:23.802 SO libspdk_util.so.10.1 00:07:23.802 LIB libspdk_trace_parser.a 00:07:23.802 SO libspdk_trace_parser.so.6.0 00:07:23.802 SYMLINK libspdk_util.so 00:07:24.062 SYMLINK libspdk_trace_parser.so 00:07:24.062 CC lib/json/json_parse.o 00:07:24.062 CC lib/env_dpdk/env.o 00:07:24.062 CC lib/env_dpdk/memory.o 00:07:24.062 CC lib/json/json_util.o 00:07:24.062 CC lib/env_dpdk/pci.o 00:07:24.062 CC lib/json/json_write.o 00:07:24.062 CC lib/idxd/idxd.o 00:07:24.062 CC lib/rdma_utils/rdma_utils.o 00:07:24.062 CC lib/vmd/vmd.o 00:07:24.062 CC lib/conf/conf.o 00:07:24.323 CC lib/vmd/led.o 00:07:24.323 CC lib/env_dpdk/init.o 00:07:24.583 CC 
lib/idxd/idxd_user.o 00:07:24.583 LIB libspdk_conf.a 00:07:24.583 LIB libspdk_rdma_utils.a 00:07:24.583 LIB libspdk_json.a 00:07:24.583 SO libspdk_conf.so.6.0 00:07:24.583 SO libspdk_rdma_utils.so.1.0 00:07:24.583 SO libspdk_json.so.6.0 00:07:24.583 SYMLINK libspdk_conf.so 00:07:24.583 SYMLINK libspdk_rdma_utils.so 00:07:24.583 CC lib/env_dpdk/threads.o 00:07:24.842 CC lib/env_dpdk/pci_ioat.o 00:07:24.842 SYMLINK libspdk_json.so 00:07:24.842 CC lib/env_dpdk/pci_virtio.o 00:07:24.842 CC lib/env_dpdk/pci_vmd.o 00:07:24.842 CC lib/env_dpdk/pci_idxd.o 00:07:24.842 CC lib/env_dpdk/pci_event.o 00:07:24.842 CC lib/env_dpdk/sigbus_handler.o 00:07:25.100 CC lib/env_dpdk/pci_dpdk.o 00:07:25.100 LIB libspdk_vmd.a 00:07:25.100 CC lib/idxd/idxd_kernel.o 00:07:25.100 SO libspdk_vmd.so.6.0 00:07:25.100 CC lib/env_dpdk/pci_dpdk_2207.o 00:07:25.100 SYMLINK libspdk_vmd.so 00:07:25.100 CC lib/env_dpdk/pci_dpdk_2211.o 00:07:25.360 CC lib/jsonrpc/jsonrpc_server.o 00:07:25.360 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:07:25.360 CC lib/jsonrpc/jsonrpc_client.o 00:07:25.360 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:07:25.360 CC lib/rdma_provider/common.o 00:07:25.360 CC lib/rdma_provider/rdma_provider_verbs.o 00:07:25.621 LIB libspdk_idxd.a 00:07:25.621 SO libspdk_idxd.so.12.1 00:07:25.621 SYMLINK libspdk_idxd.so 00:07:25.882 LIB libspdk_jsonrpc.a 00:07:25.882 LIB libspdk_rdma_provider.a 00:07:25.882 SO libspdk_jsonrpc.so.6.0 00:07:25.882 SO libspdk_rdma_provider.so.7.0 00:07:25.882 SYMLINK libspdk_rdma_provider.so 00:07:25.882 SYMLINK libspdk_jsonrpc.so 00:07:26.143 CC lib/rpc/rpc.o 00:07:26.403 LIB libspdk_env_dpdk.a 00:07:26.403 SO libspdk_env_dpdk.so.15.1 00:07:26.663 LIB libspdk_rpc.a 00:07:26.663 SYMLINK libspdk_env_dpdk.so 00:07:26.663 SO libspdk_rpc.so.6.0 00:07:26.663 SYMLINK libspdk_rpc.so 00:07:26.922 CC lib/keyring/keyring_rpc.o 00:07:26.922 CC lib/keyring/keyring.o 00:07:26.922 CC lib/notify/notify.o 00:07:26.922 CC lib/notify/notify_rpc.o 00:07:26.922 CC lib/trace/trace.o 
00:07:26.922 CC lib/trace/trace_flags.o 00:07:26.922 CC lib/trace/trace_rpc.o 00:07:27.185 LIB libspdk_notify.a 00:07:27.185 SO libspdk_notify.so.6.0 00:07:27.185 LIB libspdk_keyring.a 00:07:27.185 SYMLINK libspdk_notify.so 00:07:27.185 SO libspdk_keyring.so.2.0 00:07:27.185 LIB libspdk_trace.a 00:07:27.443 SO libspdk_trace.so.11.0 00:07:27.443 SYMLINK libspdk_keyring.so 00:07:27.443 SYMLINK libspdk_trace.so 00:07:27.701 CC lib/sock/sock_rpc.o 00:07:27.701 CC lib/sock/sock.o 00:07:27.701 CC lib/thread/thread.o 00:07:27.701 CC lib/thread/iobuf.o 00:07:28.268 LIB libspdk_sock.a 00:07:28.268 SO libspdk_sock.so.10.0 00:07:28.268 SYMLINK libspdk_sock.so 00:07:28.527 CC lib/nvme/nvme_ctrlr.o 00:07:28.527 CC lib/nvme/nvme_ctrlr_cmd.o 00:07:28.527 CC lib/nvme/nvme_fabric.o 00:07:28.527 CC lib/nvme/nvme_ns_cmd.o 00:07:28.527 CC lib/nvme/nvme_pcie.o 00:07:28.527 CC lib/nvme/nvme_ns.o 00:07:28.527 CC lib/nvme/nvme_qpair.o 00:07:28.527 CC lib/nvme/nvme_pcie_common.o 00:07:28.527 CC lib/nvme/nvme.o 00:07:29.460 CC lib/nvme/nvme_quirks.o 00:07:29.719 CC lib/nvme/nvme_transport.o 00:07:29.719 LIB libspdk_thread.a 00:07:29.719 SO libspdk_thread.so.11.0 00:07:29.719 CC lib/nvme/nvme_discovery.o 00:07:29.977 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:07:29.977 SYMLINK libspdk_thread.so 00:07:29.977 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:07:29.977 CC lib/nvme/nvme_tcp.o 00:07:30.236 CC lib/nvme/nvme_opal.o 00:07:30.236 CC lib/accel/accel.o 00:07:30.236 CC lib/nvme/nvme_io_msg.o 00:07:30.494 CC lib/nvme/nvme_poll_group.o 00:07:30.494 CC lib/accel/accel_rpc.o 00:07:30.751 CC lib/accel/accel_sw.o 00:07:30.751 CC lib/blob/blobstore.o 00:07:30.751 CC lib/nvme/nvme_zns.o 00:07:31.009 CC lib/init/json_config.o 00:07:31.009 CC lib/nvme/nvme_stubs.o 00:07:31.009 CC lib/init/subsystem.o 00:07:31.267 CC lib/init/subsystem_rpc.o 00:07:31.267 CC lib/init/rpc.o 00:07:31.267 CC lib/blob/request.o 00:07:31.267 CC lib/blob/zeroes.o 00:07:31.526 CC lib/blob/blob_bs_dev.o 00:07:31.526 CC lib/nvme/nvme_auth.o 
00:07:31.785 LIB libspdk_init.a 00:07:31.785 CC lib/nvme/nvme_cuse.o 00:07:31.785 SO libspdk_init.so.6.0 00:07:31.785 CC lib/nvme/nvme_rdma.o 00:07:31.785 SYMLINK libspdk_init.so 00:07:31.785 LIB libspdk_accel.a 00:07:31.785 CC lib/virtio/virtio.o 00:07:31.785 CC lib/virtio/virtio_vhost_user.o 00:07:31.785 SO libspdk_accel.so.16.0 00:07:31.785 CC lib/virtio/virtio_vfio_user.o 00:07:32.044 CC lib/fsdev/fsdev.o 00:07:32.044 SYMLINK libspdk_accel.so 00:07:32.044 CC lib/fsdev/fsdev_io.o 00:07:32.302 CC lib/virtio/virtio_pci.o 00:07:32.561 CC lib/event/app.o 00:07:32.561 CC lib/fsdev/fsdev_rpc.o 00:07:32.561 CC lib/event/reactor.o 00:07:32.561 LIB libspdk_virtio.a 00:07:32.561 CC lib/bdev/bdev.o 00:07:32.561 SO libspdk_virtio.so.7.0 00:07:32.561 CC lib/event/log_rpc.o 00:07:32.820 SYMLINK libspdk_virtio.so 00:07:32.820 CC lib/event/app_rpc.o 00:07:32.820 CC lib/event/scheduler_static.o 00:07:32.820 CC lib/bdev/bdev_rpc.o 00:07:32.820 CC lib/bdev/bdev_zone.o 00:07:33.077 CC lib/bdev/part.o 00:07:33.077 CC lib/bdev/scsi_nvme.o 00:07:33.077 LIB libspdk_fsdev.a 00:07:33.077 SO libspdk_fsdev.so.2.0 00:07:33.077 LIB libspdk_event.a 00:07:33.334 SO libspdk_event.so.14.0 00:07:33.334 SYMLINK libspdk_fsdev.so 00:07:33.334 SYMLINK libspdk_event.so 00:07:33.592 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:07:33.592 LIB libspdk_nvme.a 00:07:33.852 SO libspdk_nvme.so.15.0 00:07:34.175 SYMLINK libspdk_nvme.so 00:07:34.450 LIB libspdk_fuse_dispatcher.a 00:07:34.450 SO libspdk_fuse_dispatcher.so.1.0 00:07:34.450 SYMLINK libspdk_fuse_dispatcher.so 00:07:35.387 LIB libspdk_blob.a 00:07:35.646 SO libspdk_blob.so.11.0 00:07:35.646 SYMLINK libspdk_blob.so 00:07:35.905 CC lib/blobfs/blobfs.o 00:07:35.905 CC lib/blobfs/tree.o 00:07:35.905 CC lib/lvol/lvol.o 00:07:36.470 LIB libspdk_bdev.a 00:07:36.470 SO libspdk_bdev.so.17.0 00:07:36.728 SYMLINK libspdk_bdev.so 00:07:36.986 CC lib/nbd/nbd.o 00:07:36.986 CC lib/nbd/nbd_rpc.o 00:07:36.986 CC lib/ublk/ublk.o 00:07:36.986 CC lib/ublk/ublk_rpc.o 
00:07:36.986 CC lib/scsi/dev.o 00:07:36.986 CC lib/nvmf/ctrlr_discovery.o 00:07:36.986 CC lib/nvmf/ctrlr.o 00:07:36.986 CC lib/ftl/ftl_core.o 00:07:36.986 LIB libspdk_blobfs.a 00:07:37.244 SO libspdk_blobfs.so.10.0 00:07:37.244 CC lib/scsi/lun.o 00:07:37.244 CC lib/scsi/port.o 00:07:37.244 SYMLINK libspdk_blobfs.so 00:07:37.244 CC lib/scsi/scsi.o 00:07:37.244 CC lib/scsi/scsi_bdev.o 00:07:37.244 LIB libspdk_lvol.a 00:07:37.244 SO libspdk_lvol.so.10.0 00:07:37.501 CC lib/nvmf/ctrlr_bdev.o 00:07:37.501 SYMLINK libspdk_lvol.so 00:07:37.501 CC lib/nvmf/subsystem.o 00:07:37.501 CC lib/scsi/scsi_pr.o 00:07:37.501 CC lib/scsi/scsi_rpc.o 00:07:37.501 CC lib/scsi/task.o 00:07:37.501 LIB libspdk_nbd.a 00:07:37.760 CC lib/ftl/ftl_init.o 00:07:37.760 SO libspdk_nbd.so.7.0 00:07:37.760 CC lib/ftl/ftl_layout.o 00:07:37.760 SYMLINK libspdk_nbd.so 00:07:37.760 CC lib/ftl/ftl_debug.o 00:07:37.760 LIB libspdk_ublk.a 00:07:37.760 SO libspdk_ublk.so.3.0 00:07:37.760 CC lib/nvmf/nvmf.o 00:07:37.760 CC lib/nvmf/nvmf_rpc.o 00:07:37.760 SYMLINK libspdk_ublk.so 00:07:37.760 CC lib/nvmf/transport.o 00:07:37.760 CC lib/nvmf/tcp.o 00:07:37.760 LIB libspdk_scsi.a 00:07:38.020 SO libspdk_scsi.so.9.0 00:07:38.020 CC lib/ftl/ftl_io.o 00:07:38.020 SYMLINK libspdk_scsi.so 00:07:38.020 CC lib/ftl/ftl_sb.o 00:07:38.278 CC lib/ftl/ftl_l2p.o 00:07:38.278 CC lib/ftl/ftl_l2p_flat.o 00:07:38.278 CC lib/ftl/ftl_nv_cache.o 00:07:38.278 CC lib/nvmf/stubs.o 00:07:38.538 CC lib/nvmf/mdns_server.o 00:07:38.538 CC lib/ftl/ftl_band.o 00:07:38.797 CC lib/nvmf/rdma.o 00:07:38.797 CC lib/nvmf/auth.o 00:07:39.057 CC lib/ftl/ftl_band_ops.o 00:07:39.057 CC lib/ftl/ftl_writer.o 00:07:39.057 CC lib/ftl/ftl_rq.o 00:07:39.316 CC lib/vhost/vhost.o 00:07:39.316 CC lib/ftl/ftl_reloc.o 00:07:39.316 CC lib/iscsi/conn.o 00:07:39.316 CC lib/iscsi/init_grp.o 00:07:39.316 CC lib/ftl/ftl_l2p_cache.o 00:07:39.316 CC lib/vhost/vhost_rpc.o 00:07:39.574 CC lib/vhost/vhost_scsi.o 00:07:39.574 CC lib/iscsi/iscsi.o 00:07:39.833 CC 
lib/iscsi/param.o 00:07:39.833 CC lib/iscsi/portal_grp.o 00:07:40.093 CC lib/iscsi/tgt_node.o 00:07:40.093 CC lib/ftl/ftl_p2l.o 00:07:40.093 CC lib/ftl/ftl_p2l_log.o 00:07:40.093 CC lib/vhost/vhost_blk.o 00:07:40.093 CC lib/vhost/rte_vhost_user.o 00:07:40.352 CC lib/iscsi/iscsi_subsystem.o 00:07:40.352 CC lib/iscsi/iscsi_rpc.o 00:07:40.352 CC lib/iscsi/task.o 00:07:40.677 CC lib/ftl/mngt/ftl_mngt.o 00:07:40.677 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:07:40.677 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:07:40.677 CC lib/ftl/mngt/ftl_mngt_startup.o 00:07:40.936 CC lib/ftl/mngt/ftl_mngt_md.o 00:07:40.936 CC lib/ftl/mngt/ftl_mngt_misc.o 00:07:40.936 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:07:40.936 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:07:40.936 CC lib/ftl/mngt/ftl_mngt_band.o 00:07:40.936 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:07:41.193 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:07:41.193 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:07:41.193 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:07:41.193 CC lib/ftl/utils/ftl_conf.o 00:07:41.193 CC lib/ftl/utils/ftl_md.o 00:07:41.193 CC lib/ftl/utils/ftl_mempool.o 00:07:41.193 CC lib/ftl/utils/ftl_bitmap.o 00:07:41.451 CC lib/ftl/utils/ftl_property.o 00:07:41.451 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:07:41.451 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:07:41.451 LIB libspdk_vhost.a 00:07:41.451 LIB libspdk_iscsi.a 00:07:41.451 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:07:41.451 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:07:41.711 SO libspdk_vhost.so.8.0 00:07:41.711 SO libspdk_iscsi.so.8.0 00:07:41.711 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:07:41.711 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:07:41.711 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:07:41.711 SYMLINK libspdk_vhost.so 00:07:41.711 CC lib/ftl/upgrade/ftl_sb_v3.o 00:07:41.711 CC lib/ftl/upgrade/ftl_sb_v5.o 00:07:41.711 CC lib/ftl/nvc/ftl_nvc_dev.o 00:07:41.711 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:07:41.711 SYMLINK libspdk_iscsi.so 00:07:41.711 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:07:41.711 LIB 
libspdk_nvmf.a 00:07:41.711 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:07:41.970 CC lib/ftl/base/ftl_base_dev.o 00:07:41.970 CC lib/ftl/base/ftl_base_bdev.o 00:07:41.970 CC lib/ftl/ftl_trace.o 00:07:41.970 SO libspdk_nvmf.so.20.0 00:07:42.229 LIB libspdk_ftl.a 00:07:42.229 SYMLINK libspdk_nvmf.so 00:07:42.488 SO libspdk_ftl.so.9.0 00:07:42.746 SYMLINK libspdk_ftl.so 00:07:43.317 CC module/env_dpdk/env_dpdk_rpc.o 00:07:43.317 CC module/accel/iaa/accel_iaa.o 00:07:43.317 CC module/sock/posix/posix.o 00:07:43.317 CC module/keyring/file/keyring.o 00:07:43.317 CC module/accel/error/accel_error.o 00:07:43.317 CC module/blob/bdev/blob_bdev.o 00:07:43.317 CC module/accel/ioat/accel_ioat.o 00:07:43.317 CC module/scheduler/dynamic/scheduler_dynamic.o 00:07:43.317 CC module/fsdev/aio/fsdev_aio.o 00:07:43.317 CC module/accel/dsa/accel_dsa.o 00:07:43.317 LIB libspdk_env_dpdk_rpc.a 00:07:43.317 SO libspdk_env_dpdk_rpc.so.6.0 00:07:43.317 SYMLINK libspdk_env_dpdk_rpc.so 00:07:43.317 CC module/fsdev/aio/fsdev_aio_rpc.o 00:07:43.575 CC module/keyring/file/keyring_rpc.o 00:07:43.575 CC module/accel/ioat/accel_ioat_rpc.o 00:07:43.575 LIB libspdk_scheduler_dynamic.a 00:07:43.575 CC module/accel/error/accel_error_rpc.o 00:07:43.575 CC module/accel/iaa/accel_iaa_rpc.o 00:07:43.575 SO libspdk_scheduler_dynamic.so.4.0 00:07:43.575 CC module/fsdev/aio/linux_aio_mgr.o 00:07:43.575 SYMLINK libspdk_scheduler_dynamic.so 00:07:43.575 LIB libspdk_blob_bdev.a 00:07:43.575 LIB libspdk_keyring_file.a 00:07:43.575 SO libspdk_blob_bdev.so.11.0 00:07:43.575 CC module/accel/dsa/accel_dsa_rpc.o 00:07:43.575 LIB libspdk_accel_ioat.a 00:07:43.575 SO libspdk_keyring_file.so.2.0 00:07:43.575 LIB libspdk_accel_error.a 00:07:43.575 SO libspdk_accel_ioat.so.6.0 00:07:43.833 SO libspdk_accel_error.so.2.0 00:07:43.833 SYMLINK libspdk_blob_bdev.so 00:07:43.833 LIB libspdk_accel_iaa.a 00:07:43.833 SYMLINK libspdk_keyring_file.so 00:07:43.833 SO libspdk_accel_iaa.so.3.0 00:07:43.833 SYMLINK libspdk_accel_ioat.so 
00:07:43.833 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:07:43.833 SYMLINK libspdk_accel_error.so 00:07:43.833 LIB libspdk_accel_dsa.a 00:07:43.833 SYMLINK libspdk_accel_iaa.so 00:07:43.833 SO libspdk_accel_dsa.so.5.0 00:07:43.833 SYMLINK libspdk_accel_dsa.so 00:07:44.092 CC module/keyring/linux/keyring.o 00:07:44.092 LIB libspdk_scheduler_dpdk_governor.a 00:07:44.092 CC module/scheduler/gscheduler/gscheduler.o 00:07:44.092 SO libspdk_scheduler_dpdk_governor.so.4.0 00:07:44.092 CC module/bdev/delay/vbdev_delay.o 00:07:44.092 CC module/bdev/error/vbdev_error.o 00:07:44.092 CC module/bdev/gpt/gpt.o 00:07:44.092 CC module/blobfs/bdev/blobfs_bdev.o 00:07:44.092 SYMLINK libspdk_scheduler_dpdk_governor.so 00:07:44.092 CC module/bdev/delay/vbdev_delay_rpc.o 00:07:44.092 CC module/bdev/lvol/vbdev_lvol.o 00:07:44.092 CC module/keyring/linux/keyring_rpc.o 00:07:44.092 LIB libspdk_fsdev_aio.a 00:07:44.092 LIB libspdk_scheduler_gscheduler.a 00:07:44.351 SO libspdk_scheduler_gscheduler.so.4.0 00:07:44.351 SO libspdk_fsdev_aio.so.1.0 00:07:44.351 LIB libspdk_keyring_linux.a 00:07:44.351 SYMLINK libspdk_scheduler_gscheduler.so 00:07:44.351 CC module/bdev/gpt/vbdev_gpt.o 00:07:44.351 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:07:44.351 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:07:44.351 LIB libspdk_sock_posix.a 00:07:44.351 SO libspdk_keyring_linux.so.1.0 00:07:44.351 CC module/bdev/error/vbdev_error_rpc.o 00:07:44.351 SYMLINK libspdk_fsdev_aio.so 00:07:44.351 SO libspdk_sock_posix.so.6.0 00:07:44.351 SYMLINK libspdk_keyring_linux.so 00:07:44.351 SYMLINK libspdk_sock_posix.so 00:07:44.608 LIB libspdk_blobfs_bdev.a 00:07:44.608 LIB libspdk_bdev_error.a 00:07:44.608 CC module/bdev/malloc/bdev_malloc.o 00:07:44.608 LIB libspdk_bdev_delay.a 00:07:44.608 SO libspdk_blobfs_bdev.so.6.0 00:07:44.608 SO libspdk_bdev_error.so.6.0 00:07:44.608 SO libspdk_bdev_delay.so.6.0 00:07:44.608 CC module/bdev/null/bdev_null.o 00:07:44.608 CC module/bdev/nvme/bdev_nvme.o 00:07:44.608 SYMLINK 
libspdk_blobfs_bdev.so 00:07:44.608 CC module/bdev/null/bdev_null_rpc.o 00:07:44.608 SYMLINK libspdk_bdev_error.so 00:07:44.608 CC module/bdev/passthru/vbdev_passthru.o 00:07:44.608 SYMLINK libspdk_bdev_delay.so 00:07:44.608 LIB libspdk_bdev_gpt.a 00:07:44.609 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:07:44.609 SO libspdk_bdev_gpt.so.6.0 00:07:44.867 SYMLINK libspdk_bdev_gpt.so 00:07:44.867 LIB libspdk_bdev_lvol.a 00:07:44.867 CC module/bdev/raid/bdev_raid.o 00:07:44.867 SO libspdk_bdev_lvol.so.6.0 00:07:44.867 CC module/bdev/raid/bdev_raid_rpc.o 00:07:44.867 CC module/bdev/raid/bdev_raid_sb.o 00:07:44.867 SYMLINK libspdk_bdev_lvol.so 00:07:44.867 LIB libspdk_bdev_null.a 00:07:44.867 CC module/bdev/split/vbdev_split.o 00:07:44.867 CC module/bdev/zone_block/vbdev_zone_block.o 00:07:44.867 SO libspdk_bdev_null.so.6.0 00:07:44.867 CC module/bdev/malloc/bdev_malloc_rpc.o 00:07:45.126 LIB libspdk_bdev_passthru.a 00:07:45.126 SYMLINK libspdk_bdev_null.so 00:07:45.126 SO libspdk_bdev_passthru.so.6.0 00:07:45.126 CC module/bdev/aio/bdev_aio.o 00:07:45.126 CC module/bdev/aio/bdev_aio_rpc.o 00:07:45.126 SYMLINK libspdk_bdev_passthru.so 00:07:45.126 CC module/bdev/split/vbdev_split_rpc.o 00:07:45.126 CC module/bdev/raid/raid0.o 00:07:45.126 CC module/bdev/raid/raid1.o 00:07:45.126 LIB libspdk_bdev_malloc.a 00:07:45.126 CC module/bdev/ftl/bdev_ftl.o 00:07:45.126 SO libspdk_bdev_malloc.so.6.0 00:07:45.385 CC module/bdev/ftl/bdev_ftl_rpc.o 00:07:45.385 SYMLINK libspdk_bdev_malloc.so 00:07:45.385 CC module/bdev/nvme/bdev_nvme_rpc.o 00:07:45.385 LIB libspdk_bdev_split.a 00:07:45.385 SO libspdk_bdev_split.so.6.0 00:07:45.385 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:07:45.385 LIB libspdk_bdev_aio.a 00:07:45.643 SYMLINK libspdk_bdev_split.so 00:07:45.643 CC module/bdev/raid/concat.o 00:07:45.643 SO libspdk_bdev_aio.so.6.0 00:07:45.643 LIB libspdk_bdev_zone_block.a 00:07:45.643 CC module/bdev/raid/raid5f.o 00:07:45.643 LIB libspdk_bdev_ftl.a 00:07:45.643 SYMLINK 
libspdk_bdev_aio.so 00:07:45.643 SO libspdk_bdev_zone_block.so.6.0 00:07:45.643 CC module/bdev/nvme/nvme_rpc.o 00:07:45.643 SO libspdk_bdev_ftl.so.6.0 00:07:45.643 SYMLINK libspdk_bdev_zone_block.so 00:07:45.643 SYMLINK libspdk_bdev_ftl.so 00:07:45.643 CC module/bdev/nvme/bdev_mdns_client.o 00:07:45.643 CC module/bdev/nvme/vbdev_opal.o 00:07:45.643 CC module/bdev/iscsi/bdev_iscsi.o 00:07:45.902 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:07:45.902 CC module/bdev/virtio/bdev_virtio_scsi.o 00:07:45.902 CC module/bdev/nvme/vbdev_opal_rpc.o 00:07:45.902 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:07:45.902 CC module/bdev/virtio/bdev_virtio_blk.o 00:07:45.902 CC module/bdev/virtio/bdev_virtio_rpc.o 00:07:46.161 LIB libspdk_bdev_iscsi.a 00:07:46.161 SO libspdk_bdev_iscsi.so.6.0 00:07:46.161 LIB libspdk_bdev_raid.a 00:07:46.419 SYMLINK libspdk_bdev_iscsi.so 00:07:46.419 SO libspdk_bdev_raid.so.6.0 00:07:46.419 SYMLINK libspdk_bdev_raid.so 00:07:46.419 LIB libspdk_bdev_virtio.a 00:07:46.678 SO libspdk_bdev_virtio.so.6.0 00:07:46.678 SYMLINK libspdk_bdev_virtio.so 00:07:48.584 LIB libspdk_bdev_nvme.a 00:07:48.584 SO libspdk_bdev_nvme.so.7.1 00:07:48.584 SYMLINK libspdk_bdev_nvme.so 00:07:48.845 CC module/event/subsystems/scheduler/scheduler.o 00:07:48.845 CC module/event/subsystems/keyring/keyring.o 00:07:48.845 CC module/event/subsystems/iobuf/iobuf.o 00:07:48.845 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:07:48.845 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:07:48.846 CC module/event/subsystems/fsdev/fsdev.o 00:07:48.846 CC module/event/subsystems/vmd/vmd.o 00:07:48.846 CC module/event/subsystems/vmd/vmd_rpc.o 00:07:48.846 CC module/event/subsystems/sock/sock.o 00:07:49.104 LIB libspdk_event_sock.a 00:07:49.104 LIB libspdk_event_vhost_blk.a 00:07:49.104 LIB libspdk_event_scheduler.a 00:07:49.104 LIB libspdk_event_vmd.a 00:07:49.104 LIB libspdk_event_fsdev.a 00:07:49.104 SO libspdk_event_sock.so.5.0 00:07:49.104 LIB libspdk_event_keyring.a 00:07:49.104 SO 
libspdk_event_vhost_blk.so.3.0 00:07:49.104 SO libspdk_event_vmd.so.6.0 00:07:49.104 SO libspdk_event_scheduler.so.4.0 00:07:49.104 LIB libspdk_event_iobuf.a 00:07:49.104 SO libspdk_event_fsdev.so.1.0 00:07:49.104 SO libspdk_event_keyring.so.1.0 00:07:49.104 SO libspdk_event_iobuf.so.3.0 00:07:49.104 SYMLINK libspdk_event_sock.so 00:07:49.104 SYMLINK libspdk_event_vhost_blk.so 00:07:49.104 SYMLINK libspdk_event_vmd.so 00:07:49.104 SYMLINK libspdk_event_fsdev.so 00:07:49.104 SYMLINK libspdk_event_scheduler.so 00:07:49.104 SYMLINK libspdk_event_keyring.so 00:07:49.363 SYMLINK libspdk_event_iobuf.so 00:07:49.621 CC module/event/subsystems/accel/accel.o 00:07:49.621 LIB libspdk_event_accel.a 00:07:49.621 SO libspdk_event_accel.so.6.0 00:07:49.879 SYMLINK libspdk_event_accel.so 00:07:50.138 CC module/event/subsystems/bdev/bdev.o 00:07:50.397 LIB libspdk_event_bdev.a 00:07:50.397 SO libspdk_event_bdev.so.6.0 00:07:50.397 SYMLINK libspdk_event_bdev.so 00:07:50.655 CC module/event/subsystems/ublk/ublk.o 00:07:50.655 CC module/event/subsystems/nbd/nbd.o 00:07:50.655 CC module/event/subsystems/scsi/scsi.o 00:07:50.655 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:07:50.655 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:07:50.655 LIB libspdk_event_ublk.a 00:07:50.655 LIB libspdk_event_nbd.a 00:07:50.914 SO libspdk_event_ublk.so.3.0 00:07:50.914 SO libspdk_event_nbd.so.6.0 00:07:50.914 LIB libspdk_event_scsi.a 00:07:50.914 SO libspdk_event_scsi.so.6.0 00:07:50.914 SYMLINK libspdk_event_ublk.so 00:07:50.914 SYMLINK libspdk_event_nbd.so 00:07:50.914 SYMLINK libspdk_event_scsi.so 00:07:50.914 LIB libspdk_event_nvmf.a 00:07:50.914 SO libspdk_event_nvmf.so.6.0 00:07:51.173 SYMLINK libspdk_event_nvmf.so 00:07:51.173 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:07:51.173 CC module/event/subsystems/iscsi/iscsi.o 00:07:51.433 LIB libspdk_event_vhost_scsi.a 00:07:51.433 LIB libspdk_event_iscsi.a 00:07:51.433 SO libspdk_event_vhost_scsi.so.3.0 00:07:51.433 SO 
libspdk_event_iscsi.so.6.0 00:07:51.433 SYMLINK libspdk_event_vhost_scsi.so 00:07:51.433 SYMLINK libspdk_event_iscsi.so 00:07:51.692 SO libspdk.so.6.0 00:07:51.692 SYMLINK libspdk.so 00:07:51.952 CC test/rpc_client/rpc_client_test.o 00:07:51.952 TEST_HEADER include/spdk/accel.h 00:07:51.952 TEST_HEADER include/spdk/accel_module.h 00:07:51.952 TEST_HEADER include/spdk/assert.h 00:07:51.952 CXX app/trace/trace.o 00:07:51.952 TEST_HEADER include/spdk/barrier.h 00:07:51.952 TEST_HEADER include/spdk/base64.h 00:07:51.952 TEST_HEADER include/spdk/bdev.h 00:07:51.952 TEST_HEADER include/spdk/bdev_module.h 00:07:51.952 TEST_HEADER include/spdk/bdev_zone.h 00:07:51.952 TEST_HEADER include/spdk/bit_array.h 00:07:51.952 TEST_HEADER include/spdk/bit_pool.h 00:07:51.952 TEST_HEADER include/spdk/blob_bdev.h 00:07:51.952 TEST_HEADER include/spdk/blobfs_bdev.h 00:07:51.952 TEST_HEADER include/spdk/blobfs.h 00:07:51.952 TEST_HEADER include/spdk/blob.h 00:07:51.952 TEST_HEADER include/spdk/conf.h 00:07:51.952 TEST_HEADER include/spdk/config.h 00:07:51.952 CC examples/interrupt_tgt/interrupt_tgt.o 00:07:51.952 TEST_HEADER include/spdk/cpuset.h 00:07:51.952 TEST_HEADER include/spdk/crc16.h 00:07:51.952 TEST_HEADER include/spdk/crc32.h 00:07:51.952 TEST_HEADER include/spdk/crc64.h 00:07:51.952 TEST_HEADER include/spdk/dif.h 00:07:51.952 TEST_HEADER include/spdk/dma.h 00:07:51.952 TEST_HEADER include/spdk/endian.h 00:07:51.952 TEST_HEADER include/spdk/env_dpdk.h 00:07:51.952 TEST_HEADER include/spdk/env.h 00:07:51.952 TEST_HEADER include/spdk/event.h 00:07:51.952 TEST_HEADER include/spdk/fd_group.h 00:07:51.952 TEST_HEADER include/spdk/fd.h 00:07:51.952 TEST_HEADER include/spdk/file.h 00:07:51.952 TEST_HEADER include/spdk/fsdev.h 00:07:51.952 TEST_HEADER include/spdk/fsdev_module.h 00:07:51.952 TEST_HEADER include/spdk/ftl.h 00:07:51.952 TEST_HEADER include/spdk/fuse_dispatcher.h 00:07:51.952 CC examples/util/zipf/zipf.o 00:07:51.952 TEST_HEADER include/spdk/gpt_spec.h 00:07:51.952 CC 
test/thread/poller_perf/poller_perf.o 00:07:51.952 TEST_HEADER include/spdk/hexlify.h 00:07:51.952 TEST_HEADER include/spdk/histogram_data.h 00:07:51.952 TEST_HEADER include/spdk/idxd.h 00:07:51.952 TEST_HEADER include/spdk/idxd_spec.h 00:07:51.952 TEST_HEADER include/spdk/init.h 00:07:51.952 CC examples/ioat/perf/perf.o 00:07:51.952 TEST_HEADER include/spdk/ioat.h 00:07:51.952 TEST_HEADER include/spdk/ioat_spec.h 00:07:51.952 TEST_HEADER include/spdk/iscsi_spec.h 00:07:51.952 TEST_HEADER include/spdk/json.h 00:07:51.952 TEST_HEADER include/spdk/jsonrpc.h 00:07:51.952 TEST_HEADER include/spdk/keyring.h 00:07:51.952 TEST_HEADER include/spdk/keyring_module.h 00:07:51.952 TEST_HEADER include/spdk/likely.h 00:07:51.952 TEST_HEADER include/spdk/log.h 00:07:51.952 TEST_HEADER include/spdk/lvol.h 00:07:51.952 TEST_HEADER include/spdk/md5.h 00:07:51.952 TEST_HEADER include/spdk/memory.h 00:07:51.952 TEST_HEADER include/spdk/mmio.h 00:07:51.952 TEST_HEADER include/spdk/nbd.h 00:07:51.952 CC test/app/bdev_svc/bdev_svc.o 00:07:51.952 TEST_HEADER include/spdk/net.h 00:07:51.952 TEST_HEADER include/spdk/notify.h 00:07:51.952 TEST_HEADER include/spdk/nvme.h 00:07:51.952 CC test/dma/test_dma/test_dma.o 00:07:51.952 TEST_HEADER include/spdk/nvme_intel.h 00:07:51.952 TEST_HEADER include/spdk/nvme_ocssd.h 00:07:51.952 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:07:51.952 TEST_HEADER include/spdk/nvme_spec.h 00:07:51.952 CC test/env/mem_callbacks/mem_callbacks.o 00:07:51.952 TEST_HEADER include/spdk/nvme_zns.h 00:07:51.952 TEST_HEADER include/spdk/nvmf_cmd.h 00:07:51.952 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:07:51.952 TEST_HEADER include/spdk/nvmf.h 00:07:51.952 TEST_HEADER include/spdk/nvmf_spec.h 00:07:51.952 TEST_HEADER include/spdk/nvmf_transport.h 00:07:51.952 TEST_HEADER include/spdk/opal.h 00:07:51.952 TEST_HEADER include/spdk/opal_spec.h 00:07:51.952 TEST_HEADER include/spdk/pci_ids.h 00:07:51.952 TEST_HEADER include/spdk/pipe.h 00:07:51.952 TEST_HEADER 
include/spdk/queue.h 00:07:52.210 TEST_HEADER include/spdk/reduce.h 00:07:52.210 TEST_HEADER include/spdk/rpc.h 00:07:52.210 TEST_HEADER include/spdk/scheduler.h 00:07:52.210 TEST_HEADER include/spdk/scsi.h 00:07:52.210 TEST_HEADER include/spdk/scsi_spec.h 00:07:52.211 TEST_HEADER include/spdk/sock.h 00:07:52.211 TEST_HEADER include/spdk/stdinc.h 00:07:52.211 TEST_HEADER include/spdk/string.h 00:07:52.211 TEST_HEADER include/spdk/thread.h 00:07:52.211 TEST_HEADER include/spdk/trace.h 00:07:52.211 TEST_HEADER include/spdk/trace_parser.h 00:07:52.211 TEST_HEADER include/spdk/tree.h 00:07:52.211 TEST_HEADER include/spdk/ublk.h 00:07:52.211 TEST_HEADER include/spdk/util.h 00:07:52.211 TEST_HEADER include/spdk/uuid.h 00:07:52.211 TEST_HEADER include/spdk/version.h 00:07:52.211 TEST_HEADER include/spdk/vfio_user_pci.h 00:07:52.211 TEST_HEADER include/spdk/vfio_user_spec.h 00:07:52.211 TEST_HEADER include/spdk/vhost.h 00:07:52.211 TEST_HEADER include/spdk/vmd.h 00:07:52.211 TEST_HEADER include/spdk/xor.h 00:07:52.211 TEST_HEADER include/spdk/zipf.h 00:07:52.211 CXX test/cpp_headers/accel.o 00:07:52.211 LINK zipf 00:07:52.211 LINK interrupt_tgt 00:07:52.211 LINK rpc_client_test 00:07:52.211 LINK poller_perf 00:07:52.211 LINK bdev_svc 00:07:52.211 LINK ioat_perf 00:07:52.211 CXX test/cpp_headers/accel_module.o 00:07:52.469 LINK spdk_trace 00:07:52.469 CC app/trace_record/trace_record.o 00:07:52.469 CC test/env/vtophys/vtophys.o 00:07:52.469 CC app/nvmf_tgt/nvmf_main.o 00:07:52.469 CXX test/cpp_headers/assert.o 00:07:52.469 CC app/iscsi_tgt/iscsi_tgt.o 00:07:52.469 CC examples/ioat/verify/verify.o 00:07:52.727 LINK vtophys 00:07:52.727 LINK test_dma 00:07:52.727 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:07:52.727 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:07:52.727 CXX test/cpp_headers/barrier.o 00:07:52.727 LINK nvmf_tgt 00:07:52.727 LINK iscsi_tgt 00:07:52.727 LINK spdk_trace_record 00:07:52.727 LINK mem_callbacks 00:07:52.727 LINK verify 00:07:52.986 CC 
test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:07:52.986 CXX test/cpp_headers/base64.o 00:07:52.986 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:07:52.986 CXX test/cpp_headers/bdev.o 00:07:52.986 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:07:52.986 CC test/env/memory/memory_ut.o 00:07:52.986 CXX test/cpp_headers/bdev_module.o 00:07:52.986 CC test/env/pci/pci_ut.o 00:07:52.986 CC app/spdk_tgt/spdk_tgt.o 00:07:53.247 CC app/spdk_lspci/spdk_lspci.o 00:07:53.247 CC examples/thread/thread/thread_ex.o 00:07:53.247 LINK nvme_fuzz 00:07:53.247 LINK env_dpdk_post_init 00:07:53.247 LINK spdk_lspci 00:07:53.247 CXX test/cpp_headers/bdev_zone.o 00:07:53.247 LINK spdk_tgt 00:07:53.505 LINK vhost_fuzz 00:07:53.505 LINK thread 00:07:53.505 CC examples/sock/hello_world/hello_sock.o 00:07:53.505 CXX test/cpp_headers/bit_array.o 00:07:53.505 CC app/spdk_nvme_perf/perf.o 00:07:53.505 CC examples/vmd/lsvmd/lsvmd.o 00:07:53.505 LINK pci_ut 00:07:53.765 CC test/app/histogram_perf/histogram_perf.o 00:07:53.765 CC examples/vmd/led/led.o 00:07:53.765 LINK lsvmd 00:07:53.765 CXX test/cpp_headers/bit_pool.o 00:07:53.765 CC test/app/jsoncat/jsoncat.o 00:07:53.765 LINK histogram_perf 00:07:53.765 LINK led 00:07:53.765 LINK hello_sock 00:07:53.765 CXX test/cpp_headers/blob_bdev.o 00:07:54.024 LINK jsoncat 00:07:54.024 CXX test/cpp_headers/blobfs_bdev.o 00:07:54.024 CXX test/cpp_headers/blobfs.o 00:07:54.024 CC app/spdk_nvme_identify/identify.o 00:07:54.024 CXX test/cpp_headers/blob.o 00:07:54.024 CC test/app/stub/stub.o 00:07:54.283 CC app/spdk_nvme_discover/discovery_aer.o 00:07:54.283 CXX test/cpp_headers/conf.o 00:07:54.283 CC examples/idxd/perf/perf.o 00:07:54.283 CC app/spdk_top/spdk_top.o 00:07:54.283 CC app/vhost/vhost.o 00:07:54.283 LINK stub 00:07:54.283 CXX test/cpp_headers/config.o 00:07:54.283 CXX test/cpp_headers/cpuset.o 00:07:54.542 LINK spdk_nvme_discover 00:07:54.542 LINK memory_ut 00:07:54.542 LINK vhost 00:07:54.542 CXX test/cpp_headers/crc16.o 00:07:54.542 LINK 
idxd_perf 00:07:54.542 LINK spdk_nvme_perf 00:07:54.542 CC app/spdk_dd/spdk_dd.o 00:07:54.801 CXX test/cpp_headers/crc32.o 00:07:54.801 CC app/fio/nvme/fio_plugin.o 00:07:54.801 CC test/event/event_perf/event_perf.o 00:07:54.801 CC app/fio/bdev/fio_plugin.o 00:07:54.801 LINK iscsi_fuzz 00:07:54.801 CC test/event/reactor/reactor.o 00:07:55.060 CXX test/cpp_headers/crc64.o 00:07:55.060 CC examples/fsdev/hello_world/hello_fsdev.o 00:07:55.060 LINK event_perf 00:07:55.060 LINK spdk_nvme_identify 00:07:55.060 LINK reactor 00:07:55.060 LINK spdk_dd 00:07:55.061 CXX test/cpp_headers/dif.o 00:07:55.319 CC test/event/reactor_perf/reactor_perf.o 00:07:55.319 CC test/nvme/aer/aer.o 00:07:55.319 CC test/nvme/reset/reset.o 00:07:55.319 CXX test/cpp_headers/dma.o 00:07:55.319 LINK hello_fsdev 00:07:55.319 CC test/nvme/sgl/sgl.o 00:07:55.319 CC test/nvme/e2edp/nvme_dp.o 00:07:55.319 LINK reactor_perf 00:07:55.319 LINK spdk_top 00:07:55.579 LINK spdk_bdev 00:07:55.579 LINK spdk_nvme 00:07:55.579 CXX test/cpp_headers/endian.o 00:07:55.579 CXX test/cpp_headers/env_dpdk.o 00:07:55.579 LINK reset 00:07:55.579 LINK sgl 00:07:55.579 LINK aer 00:07:55.838 CC test/event/app_repeat/app_repeat.o 00:07:55.838 CC test/nvme/overhead/overhead.o 00:07:55.838 CC test/nvme/err_injection/err_injection.o 00:07:55.838 LINK nvme_dp 00:07:55.838 CXX test/cpp_headers/env.o 00:07:55.838 CC examples/accel/perf/accel_perf.o 00:07:55.838 CXX test/cpp_headers/event.o 00:07:55.838 CXX test/cpp_headers/fd_group.o 00:07:55.838 CC test/nvme/startup/startup.o 00:07:55.838 LINK app_repeat 00:07:55.838 CC test/nvme/reserve/reserve.o 00:07:55.838 LINK err_injection 00:07:56.097 CXX test/cpp_headers/fd.o 00:07:56.097 CC test/nvme/simple_copy/simple_copy.o 00:07:56.097 LINK overhead 00:07:56.097 CC test/nvme/connect_stress/connect_stress.o 00:07:56.097 LINK startup 00:07:56.097 CC test/nvme/boot_partition/boot_partition.o 00:07:56.097 LINK reserve 00:07:56.097 CC test/event/scheduler/scheduler.o 00:07:56.356 CC 
test/nvme/compliance/nvme_compliance.o 00:07:56.356 CXX test/cpp_headers/file.o 00:07:56.356 LINK connect_stress 00:07:56.356 LINK boot_partition 00:07:56.356 CC test/nvme/fused_ordering/fused_ordering.o 00:07:56.356 CC test/nvme/doorbell_aers/doorbell_aers.o 00:07:56.356 LINK simple_copy 00:07:56.356 CXX test/cpp_headers/fsdev.o 00:07:56.356 CC test/nvme/fdp/fdp.o 00:07:56.614 LINK accel_perf 00:07:56.614 CXX test/cpp_headers/fsdev_module.o 00:07:56.614 LINK scheduler 00:07:56.614 CC test/nvme/cuse/cuse.o 00:07:56.614 LINK fused_ordering 00:07:56.614 LINK doorbell_aers 00:07:56.614 LINK nvme_compliance 00:07:56.614 CXX test/cpp_headers/ftl.o 00:07:56.873 CC test/accel/dif/dif.o 00:07:56.873 CXX test/cpp_headers/fuse_dispatcher.o 00:07:56.873 CXX test/cpp_headers/gpt_spec.o 00:07:56.873 CC examples/blob/hello_world/hello_blob.o 00:07:56.873 LINK fdp 00:07:56.873 CC test/blobfs/mkfs/mkfs.o 00:07:56.873 CC examples/blob/cli/blobcli.o 00:07:56.873 CC test/lvol/esnap/esnap.o 00:07:57.131 CXX test/cpp_headers/hexlify.o 00:07:57.131 CC examples/nvme/hello_world/hello_world.o 00:07:57.131 LINK hello_blob 00:07:57.131 CC examples/nvme/nvme_manage/nvme_manage.o 00:07:57.131 CC examples/nvme/reconnect/reconnect.o 00:07:57.131 LINK mkfs 00:07:57.132 CXX test/cpp_headers/histogram_data.o 00:07:57.389 CXX test/cpp_headers/idxd.o 00:07:57.389 CXX test/cpp_headers/idxd_spec.o 00:07:57.389 LINK hello_world 00:07:57.389 CC examples/nvme/arbitration/arbitration.o 00:07:57.389 CXX test/cpp_headers/init.o 00:07:57.648 LINK reconnect 00:07:57.648 LINK blobcli 00:07:57.648 CC examples/nvme/hotplug/hotplug.o 00:07:57.648 CXX test/cpp_headers/ioat.o 00:07:57.648 LINK dif 00:07:57.907 CC examples/nvme/cmb_copy/cmb_copy.o 00:07:57.907 CXX test/cpp_headers/ioat_spec.o 00:07:57.907 CC examples/nvme/abort/abort.o 00:07:57.907 LINK nvme_manage 00:07:57.907 LINK hotplug 00:07:57.907 LINK arbitration 00:07:57.907 CXX test/cpp_headers/iscsi_spec.o 00:07:57.907 LINK cmb_copy 00:07:57.907 CC 
examples/bdev/hello_world/hello_bdev.o 00:07:58.174 CXX test/cpp_headers/json.o 00:07:58.174 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:07:58.174 CXX test/cpp_headers/jsonrpc.o 00:07:58.174 CC examples/bdev/bdevperf/bdevperf.o 00:07:58.174 CXX test/cpp_headers/keyring.o 00:07:58.174 LINK cuse 00:07:58.174 CXX test/cpp_headers/keyring_module.o 00:07:58.174 CC test/bdev/bdevio/bdevio.o 00:07:58.174 LINK abort 00:07:58.174 LINK hello_bdev 00:07:58.444 LINK pmr_persistence 00:07:58.444 CXX test/cpp_headers/likely.o 00:07:58.444 CXX test/cpp_headers/log.o 00:07:58.444 CXX test/cpp_headers/lvol.o 00:07:58.444 CXX test/cpp_headers/md5.o 00:07:58.444 CXX test/cpp_headers/memory.o 00:07:58.444 CXX test/cpp_headers/mmio.o 00:07:58.444 CXX test/cpp_headers/nbd.o 00:07:58.444 CXX test/cpp_headers/net.o 00:07:58.444 CXX test/cpp_headers/notify.o 00:07:58.444 CXX test/cpp_headers/nvme.o 00:07:58.704 CXX test/cpp_headers/nvme_intel.o 00:07:58.704 CXX test/cpp_headers/nvme_ocssd.o 00:07:58.704 CXX test/cpp_headers/nvme_ocssd_spec.o 00:07:58.704 CXX test/cpp_headers/nvme_spec.o 00:07:58.704 CXX test/cpp_headers/nvme_zns.o 00:07:58.704 CXX test/cpp_headers/nvmf_cmd.o 00:07:58.704 LINK bdevio 00:07:58.704 CXX test/cpp_headers/nvmf_fc_spec.o 00:07:58.704 CXX test/cpp_headers/nvmf.o 00:07:58.960 CXX test/cpp_headers/nvmf_spec.o 00:07:58.960 CXX test/cpp_headers/nvmf_transport.o 00:07:58.960 CXX test/cpp_headers/opal.o 00:07:58.960 CXX test/cpp_headers/opal_spec.o 00:07:58.960 CXX test/cpp_headers/pci_ids.o 00:07:58.960 CXX test/cpp_headers/pipe.o 00:07:58.960 CXX test/cpp_headers/queue.o 00:07:58.960 CXX test/cpp_headers/reduce.o 00:07:58.960 CXX test/cpp_headers/rpc.o 00:07:58.960 CXX test/cpp_headers/scheduler.o 00:07:59.219 CXX test/cpp_headers/scsi.o 00:07:59.219 CXX test/cpp_headers/scsi_spec.o 00:07:59.219 CXX test/cpp_headers/sock.o 00:07:59.219 CXX test/cpp_headers/stdinc.o 00:07:59.219 CXX test/cpp_headers/string.o 00:07:59.219 LINK bdevperf 00:07:59.219 CXX 
test/cpp_headers/thread.o 00:07:59.219 CXX test/cpp_headers/trace.o 00:07:59.219 CXX test/cpp_headers/trace_parser.o 00:07:59.219 CXX test/cpp_headers/tree.o 00:07:59.219 CXX test/cpp_headers/ublk.o 00:07:59.219 CXX test/cpp_headers/util.o 00:07:59.219 CXX test/cpp_headers/uuid.o 00:07:59.219 CXX test/cpp_headers/version.o 00:07:59.219 CXX test/cpp_headers/vfio_user_pci.o 00:07:59.219 CXX test/cpp_headers/vfio_user_spec.o 00:07:59.478 CXX test/cpp_headers/vhost.o 00:07:59.478 CXX test/cpp_headers/vmd.o 00:07:59.478 CXX test/cpp_headers/xor.o 00:07:59.478 CXX test/cpp_headers/zipf.o 00:07:59.736 CC examples/nvmf/nvmf/nvmf.o 00:07:59.995 LINK nvmf 00:08:04.182 LINK esnap 00:08:04.750 ************************************ 00:08:04.750 END TEST make 00:08:04.750 ************************************ 00:08:04.750 00:08:04.750 real 1m47.794s 00:08:04.750 user 10m3.920s 00:08:04.750 sys 1m54.806s 00:08:04.750 12:07:00 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:08:04.750 12:07:00 make -- common/autotest_common.sh@10 -- $ set +x 00:08:04.750 12:07:00 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:08:04.750 12:07:00 -- pm/common@29 -- $ signal_monitor_resources TERM 00:08:04.750 12:07:00 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:08:04.750 12:07:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:04.750 12:07:00 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:08:04.750 12:07:00 -- pm/common@44 -- $ pid=5252 00:08:04.750 12:07:00 -- pm/common@50 -- $ kill -TERM 5252 00:08:04.750 12:07:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:04.750 12:07:00 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:08:04.750 12:07:00 -- pm/common@44 -- $ pid=5254 00:08:04.750 12:07:00 -- pm/common@50 -- $ kill -TERM 5254 00:08:04.750 12:07:00 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || 
SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:08:04.750 12:07:00 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:08:05.010 12:07:00 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:05.010 12:07:00 -- common/autotest_common.sh@1693 -- # lcov --version 00:08:05.010 12:07:00 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:05.010 12:07:00 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:05.010 12:07:00 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:05.010 12:07:00 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:05.010 12:07:00 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:05.010 12:07:00 -- scripts/common.sh@336 -- # IFS=.-: 00:08:05.010 12:07:00 -- scripts/common.sh@336 -- # read -ra ver1 00:08:05.010 12:07:00 -- scripts/common.sh@337 -- # IFS=.-: 00:08:05.010 12:07:00 -- scripts/common.sh@337 -- # read -ra ver2 00:08:05.010 12:07:00 -- scripts/common.sh@338 -- # local 'op=<' 00:08:05.011 12:07:00 -- scripts/common.sh@340 -- # ver1_l=2 00:08:05.011 12:07:00 -- scripts/common.sh@341 -- # ver2_l=1 00:08:05.011 12:07:00 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:05.011 12:07:00 -- scripts/common.sh@344 -- # case "$op" in 00:08:05.011 12:07:00 -- scripts/common.sh@345 -- # : 1 00:08:05.011 12:07:00 -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:05.011 12:07:00 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:05.011 12:07:00 -- scripts/common.sh@365 -- # decimal 1 00:08:05.011 12:07:00 -- scripts/common.sh@353 -- # local d=1 00:08:05.011 12:07:00 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:05.011 12:07:00 -- scripts/common.sh@355 -- # echo 1 00:08:05.011 12:07:00 -- scripts/common.sh@365 -- # ver1[v]=1 00:08:05.011 12:07:00 -- scripts/common.sh@366 -- # decimal 2 00:08:05.011 12:07:00 -- scripts/common.sh@353 -- # local d=2 00:08:05.011 12:07:00 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:05.011 12:07:00 -- scripts/common.sh@355 -- # echo 2 00:08:05.011 12:07:00 -- scripts/common.sh@366 -- # ver2[v]=2 00:08:05.011 12:07:00 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:05.011 12:07:00 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:05.011 12:07:00 -- scripts/common.sh@368 -- # return 0 00:08:05.011 12:07:00 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:05.011 12:07:00 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:05.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.011 --rc genhtml_branch_coverage=1 00:08:05.011 --rc genhtml_function_coverage=1 00:08:05.011 --rc genhtml_legend=1 00:08:05.011 --rc geninfo_all_blocks=1 00:08:05.011 --rc geninfo_unexecuted_blocks=1 00:08:05.011 00:08:05.011 ' 00:08:05.011 12:07:00 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:05.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.011 --rc genhtml_branch_coverage=1 00:08:05.011 --rc genhtml_function_coverage=1 00:08:05.011 --rc genhtml_legend=1 00:08:05.011 --rc geninfo_all_blocks=1 00:08:05.011 --rc geninfo_unexecuted_blocks=1 00:08:05.011 00:08:05.011 ' 00:08:05.011 12:07:00 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:05.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.011 --rc genhtml_branch_coverage=1 00:08:05.011 --rc 
genhtml_function_coverage=1 00:08:05.011 --rc genhtml_legend=1 00:08:05.011 --rc geninfo_all_blocks=1 00:08:05.011 --rc geninfo_unexecuted_blocks=1 00:08:05.011 00:08:05.011 ' 00:08:05.011 12:07:00 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:05.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.011 --rc genhtml_branch_coverage=1 00:08:05.011 --rc genhtml_function_coverage=1 00:08:05.011 --rc genhtml_legend=1 00:08:05.011 --rc geninfo_all_blocks=1 00:08:05.011 --rc geninfo_unexecuted_blocks=1 00:08:05.011 00:08:05.011 ' 00:08:05.011 12:07:00 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:05.011 12:07:00 -- nvmf/common.sh@7 -- # uname -s 00:08:05.011 12:07:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:05.011 12:07:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:05.011 12:07:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:05.011 12:07:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:05.011 12:07:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:05.011 12:07:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:05.011 12:07:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:05.011 12:07:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:05.011 12:07:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:05.011 12:07:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:05.011 12:07:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ef1d303-9415-4390-8cec-f584d6dbee6a 00:08:05.011 12:07:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ef1d303-9415-4390-8cec-f584d6dbee6a 00:08:05.011 12:07:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:05.011 12:07:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:05.011 12:07:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:05.011 12:07:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:08:05.011 12:07:00 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:05.011 12:07:00 -- scripts/common.sh@15 -- # shopt -s extglob 00:08:05.011 12:07:00 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:05.011 12:07:00 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:05.011 12:07:00 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:05.011 12:07:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.011 12:07:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.011 12:07:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.011 12:07:00 -- paths/export.sh@5 -- # export PATH 00:08:05.011 12:07:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.011 12:07:00 -- nvmf/common.sh@51 -- # : 0 00:08:05.011 12:07:00 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:05.011 12:07:00 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:05.011 12:07:00 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:08:05.011 12:07:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:05.011 12:07:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:05.011 12:07:00 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:05.011 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:05.011 12:07:00 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:05.011 12:07:00 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:05.011 12:07:00 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:05.011 12:07:00 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:08:05.011 12:07:00 -- spdk/autotest.sh@32 -- # uname -s 00:08:05.011 12:07:00 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:08:05.011 12:07:00 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:08:05.011 12:07:00 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:08:05.011 12:07:00 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:08:05.011 12:07:00 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:08:05.011 12:07:00 -- spdk/autotest.sh@44 -- # modprobe nbd 00:08:05.011 12:07:01 -- spdk/autotest.sh@46 -- # type -P udevadm 00:08:05.011 12:07:01 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:08:05.011 12:07:01 -- spdk/autotest.sh@48 -- # udevadm_pid=54406 00:08:05.011 12:07:01 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:08:05.011 12:07:01 -- pm/common@17 -- # local monitor 00:08:05.011 12:07:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:05.011 12:07:01 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:08:05.011 12:07:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:05.011 12:07:01 -- pm/common@25 -- # sleep 1 00:08:05.011 12:07:01 -- pm/common@21 -- # date +%s 00:08:05.011 12:07:01 -- 
pm/common@21 -- # date +%s 00:08:05.011 12:07:01 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732536421 00:08:05.011 12:07:01 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732536421 00:08:05.011 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732536421_collect-cpu-load.pm.log 00:08:05.011 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732536421_collect-vmstat.pm.log 00:08:05.947 12:07:02 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:08:05.947 12:07:02 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:08:05.947 12:07:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:05.947 12:07:02 -- common/autotest_common.sh@10 -- # set +x 00:08:05.947 12:07:02 -- spdk/autotest.sh@59 -- # create_test_list 00:08:05.947 12:07:02 -- common/autotest_common.sh@752 -- # xtrace_disable 00:08:05.947 12:07:02 -- common/autotest_common.sh@10 -- # set +x 00:08:06.205 12:07:02 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:08:06.205 12:07:02 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:08:06.205 12:07:02 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:08:06.205 12:07:02 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:08:06.205 12:07:02 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:08:06.205 12:07:02 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:08:06.205 12:07:02 -- common/autotest_common.sh@1457 -- # uname 00:08:06.205 12:07:02 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:08:06.205 12:07:02 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:08:06.205 12:07:02 -- common/autotest_common.sh@1477 -- 
# uname 00:08:06.206 12:07:02 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:08:06.206 12:07:02 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:08:06.206 12:07:02 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:08:06.206 lcov: LCOV version 1.15 00:08:06.206 12:07:02 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:08:24.323 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:08:24.323 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:08:39.199 12:07:35 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:08:39.199 12:07:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:39.199 12:07:35 -- common/autotest_common.sh@10 -- # set +x 00:08:39.199 12:07:35 -- spdk/autotest.sh@78 -- # rm -f 00:08:39.199 12:07:35 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:39.763 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:39.763 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:08:39.763 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:08:40.019 12:07:35 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:08:40.019 12:07:35 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:08:40.019 12:07:35 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:08:40.020 12:07:35 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:08:40.020 
12:07:35 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:40.020 12:07:35 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:08:40.020 12:07:35 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:08:40.020 12:07:35 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:40.020 12:07:35 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:40.020 12:07:35 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:40.020 12:07:35 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:08:40.020 12:07:35 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:08:40.020 12:07:35 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:40.020 12:07:35 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:40.020 12:07:35 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:40.020 12:07:35 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:08:40.020 12:07:35 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:08:40.020 12:07:35 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:08:40.020 12:07:35 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:40.020 12:07:35 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:40.020 12:07:35 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:08:40.020 12:07:35 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:08:40.020 12:07:35 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:08:40.020 12:07:35 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:40.020 12:07:35 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:08:40.020 12:07:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:40.020 12:07:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:40.020 12:07:35 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:08:40.020 12:07:35 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:08:40.020 12:07:35 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:08:40.020 No valid GPT data, bailing 00:08:40.020 12:07:35 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:08:40.020 12:07:35 -- scripts/common.sh@394 -- # pt= 00:08:40.020 12:07:35 -- scripts/common.sh@395 -- # return 1 00:08:40.020 12:07:35 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:08:40.020 1+0 records in 00:08:40.020 1+0 records out 00:08:40.020 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00490434 s, 214 MB/s 00:08:40.020 12:07:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:40.020 12:07:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:40.020 12:07:35 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:08:40.020 12:07:35 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:08:40.020 12:07:35 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:08:40.020 No valid GPT data, bailing 00:08:40.020 12:07:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:08:40.020 12:07:36 -- scripts/common.sh@394 -- # pt= 00:08:40.020 12:07:36 -- scripts/common.sh@395 -- # return 1 00:08:40.020 12:07:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:08:40.020 1+0 records in 00:08:40.020 1+0 records out 00:08:40.020 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00471768 s, 222 MB/s 00:08:40.020 12:07:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:40.020 12:07:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:40.020 12:07:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:08:40.020 12:07:36 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:08:40.020 12:07:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:08:40.020 No valid GPT data, bailing 00:08:40.020 12:07:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:08:40.020 12:07:36 -- scripts/common.sh@394 -- # pt= 00:08:40.020 12:07:36 -- scripts/common.sh@395 -- # return 1 00:08:40.020 12:07:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:08:40.020 1+0 records in 00:08:40.020 1+0 records out 00:08:40.020 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00416811 s, 252 MB/s 00:08:40.020 12:07:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:40.020 12:07:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:40.020 12:07:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:08:40.020 12:07:36 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:08:40.020 12:07:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:08:40.303 No valid GPT data, bailing 00:08:40.303 12:07:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:08:40.303 12:07:36 -- scripts/common.sh@394 -- # pt= 00:08:40.303 12:07:36 -- scripts/common.sh@395 -- # return 1 00:08:40.303 12:07:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:08:40.303 1+0 records in 00:08:40.303 1+0 records out 00:08:40.303 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00451388 s, 232 MB/s 00:08:40.303 12:07:36 -- spdk/autotest.sh@105 -- # sync 00:08:40.303 12:07:36 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:08:40.303 12:07:36 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:08:40.303 12:07:36 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:08:42.199 12:07:38 -- spdk/autotest.sh@111 -- # uname -s 00:08:42.199 12:07:38 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:08:42.199 12:07:38 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:08:42.199 12:07:38 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
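The wipe step traced above ("No valid GPT data, bailing" followed by `dd`) can be sketched as: when neither `spdk-gpt.py` nor `blkid -s PTTYPE` finds a partition table, zero the first 1 MiB of the namespace. A sparse file stands in for `/dev/nvmeXnY` here so no real device is touched, and the empty `pt` mimics blkid finding nothing:

```shell
img=$(mktemp)
printf 'stale-metadata' > "$img"      # pretend leftover signatures
truncate -s 2M "$img"                 # 2 MiB fake namespace

pt=""                                 # as if: blkid -s PTTYPE -o value found nothing
if [ -z "$pt" ]; then
    # conv=notrunc preserves the device size; only the first 1 MiB is zeroed
    dd if=/dev/zero of="$img" bs=1M count=1 conv=notrunc status=none
fi
first_byte=$(head -c 1 "$img" | od -An -tu1 | tr -d ' ')
echo "first byte after wipe: $first_byte"
```

`conv=notrunc` matters on a file-backed stand-in: without it, `dd` would truncate the "device" to 1 MiB instead of just overwriting its head.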
00:08:43.132 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:43.132 Hugepages 00:08:43.132 node hugesize free / total 00:08:43.132 node0 1048576kB 0 / 0 00:08:43.132 node0 2048kB 0 / 0 00:08:43.132 00:08:43.132 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:43.132 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:08:43.132 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:08:43.132 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:08:43.132 12:07:39 -- spdk/autotest.sh@117 -- # uname -s 00:08:43.132 12:07:39 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:08:43.132 12:07:39 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:08:43.132 12:07:39 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:43.697 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:43.955 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:43.955 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:43.955 12:07:40 -- common/autotest_common.sh@1517 -- # sleep 1 00:08:45.329 12:07:41 -- common/autotest_common.sh@1518 -- # bdfs=() 00:08:45.329 12:07:41 -- common/autotest_common.sh@1518 -- # local bdfs 00:08:45.329 12:07:41 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:08:45.329 12:07:41 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:08:45.329 12:07:41 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:45.329 12:07:41 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:45.329 12:07:41 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:45.329 12:07:41 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:45.329 12:07:41 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:45.329 12:07:41 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:08:45.329 12:07:41 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:08:45.329 12:07:41 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:45.329 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:45.587 Waiting for block devices as requested 00:08:45.587 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:45.587 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:45.587 12:07:41 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:08:45.587 12:07:41 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:08:45.587 12:07:41 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:08:45.587 12:07:41 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:08:45.587 12:07:41 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:08:45.587 12:07:41 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:08:45.846 12:07:41 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:08:45.846 12:07:41 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:08:45.846 12:07:41 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:08:45.846 12:07:41 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:08:45.846 12:07:41 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:08:45.846 12:07:41 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:45.846 12:07:41 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:45.846 12:07:41 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:08:45.846 12:07:41 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:45.846 12:07:41 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 00:08:45.846 12:07:41 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:08:45.846 12:07:41 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:45.846 12:07:41 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:45.846 12:07:41 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:45.846 12:07:41 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:08:45.846 12:07:41 -- common/autotest_common.sh@1543 -- # continue 00:08:45.846 12:07:41 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:08:45.846 12:07:41 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:08:45.846 12:07:41 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:08:45.846 12:07:41 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:08:45.846 12:07:41 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:08:45.846 12:07:41 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:08:45.846 12:07:41 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:08:45.846 12:07:41 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:08:45.846 12:07:41 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:08:45.846 12:07:41 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:08:45.846 12:07:41 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:08:45.846 12:07:41 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:45.846 12:07:41 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:45.846 12:07:41 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:08:45.846 12:07:41 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:45.846 12:07:41 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:08:45.846 12:07:41 -- common/autotest_common.sh@1540 -- # nvme id-ctrl 
/dev/nvme0 00:08:45.846 12:07:41 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:45.846 12:07:41 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:45.846 12:07:41 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:45.846 12:07:41 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:08:45.846 12:07:41 -- common/autotest_common.sh@1543 -- # continue 00:08:45.846 12:07:41 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:08:45.846 12:07:41 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:45.846 12:07:41 -- common/autotest_common.sh@10 -- # set +x 00:08:45.846 12:07:41 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:08:45.846 12:07:41 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:45.846 12:07:41 -- common/autotest_common.sh@10 -- # set +x 00:08:45.846 12:07:41 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:46.413 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:46.672 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:46.672 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:46.672 12:07:42 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:08:46.672 12:07:42 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:46.672 12:07:42 -- common/autotest_common.sh@10 -- # set +x 00:08:46.672 12:07:42 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:08:46.672 12:07:42 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:08:46.672 12:07:42 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:08:46.672 12:07:42 -- common/autotest_common.sh@1563 -- # bdfs=() 00:08:46.672 12:07:42 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:08:46.672 12:07:42 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:08:46.672 12:07:42 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:08:46.672 12:07:42 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:08:46.672 
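The OACS probe traced above greps `nvme id-ctrl` output for the `oacs` field and masks bit 3 (0x8), which indicates Namespace Management support. A sketch with the `id-ctrl` text faked (using the `0x12a` value from this log) so no NVMe device or nvme-cli is needed:

```shell
id_ctrl='oacs      : 0x12a'           # faked id-ctrl line; value as seen above
oacs=$(printf '%s\n' "$id_ctrl" | grep oacs | cut -d: -f2)
oacs_ns_manage=$(( oacs & 0x8 ))      # bit 3 set -> NS management supported
echo "ns-manage: $oacs_ns_manage"
```

A nonzero result is what lets the script go on to read `unvmcap`; here `0x12a & 0x8 = 8`, matching the `[[ 8 -ne 0 ]]` branch taken in the trace.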
12:07:42 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:46.672 12:07:42 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:46.672 12:07:42 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:46.672 12:07:42 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:46.672 12:07:42 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:46.672 12:07:42 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:08:46.672 12:07:42 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:08:46.672 12:07:42 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:08:46.672 12:07:42 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:08:46.672 12:07:42 -- common/autotest_common.sh@1566 -- # device=0x0010 00:08:46.672 12:07:42 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:46.672 12:07:42 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:08:46.672 12:07:42 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:08:46.672 12:07:42 -- common/autotest_common.sh@1566 -- # device=0x0010 00:08:46.672 12:07:42 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:46.672 12:07:42 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:08:46.672 12:07:42 -- common/autotest_common.sh@1572 -- # return 0 00:08:46.672 12:07:42 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:08:46.672 12:07:42 -- common/autotest_common.sh@1580 -- # return 0 00:08:46.672 12:07:42 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:08:46.672 12:07:42 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:08:46.672 12:07:42 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:46.672 12:07:42 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:46.672 12:07:42 -- spdk/autotest.sh@149 -- # timing_enter lib 00:08:46.672 12:07:42 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:08:46.672 12:07:42 -- common/autotest_common.sh@10 -- # set +x 00:08:46.931 12:07:42 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:08:46.931 12:07:42 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:46.931 12:07:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:46.931 12:07:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.931 12:07:42 -- common/autotest_common.sh@10 -- # set +x 00:08:46.931 ************************************ 00:08:46.931 START TEST env 00:08:46.931 ************************************ 00:08:46.931 12:07:42 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:46.931 * Looking for test storage... 00:08:46.931 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:08:46.931 12:07:42 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:46.931 12:07:42 env -- common/autotest_common.sh@1693 -- # lcov --version 00:08:46.931 12:07:42 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:46.931 12:07:42 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:46.931 12:07:42 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:46.931 12:07:42 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:46.931 12:07:42 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:46.931 12:07:42 env -- scripts/common.sh@336 -- # IFS=.-: 00:08:46.931 12:07:42 env -- scripts/common.sh@336 -- # read -ra ver1 00:08:46.931 12:07:42 env -- scripts/common.sh@337 -- # IFS=.-: 00:08:46.931 12:07:42 env -- scripts/common.sh@337 -- # read -ra ver2 00:08:46.931 12:07:42 env -- scripts/common.sh@338 -- # local 'op=<' 00:08:46.931 12:07:42 env -- scripts/common.sh@340 -- # ver1_l=2 00:08:46.931 12:07:42 env -- scripts/common.sh@341 -- # ver2_l=1 00:08:46.931 12:07:42 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:46.931 12:07:42 env -- 
scripts/common.sh@344 -- # case "$op" in 00:08:46.932 12:07:42 env -- scripts/common.sh@345 -- # : 1 00:08:46.932 12:07:42 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:46.932 12:07:42 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:46.932 12:07:42 env -- scripts/common.sh@365 -- # decimal 1 00:08:46.932 12:07:42 env -- scripts/common.sh@353 -- # local d=1 00:08:46.932 12:07:42 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:46.932 12:07:42 env -- scripts/common.sh@355 -- # echo 1 00:08:46.932 12:07:42 env -- scripts/common.sh@365 -- # ver1[v]=1 00:08:46.932 12:07:42 env -- scripts/common.sh@366 -- # decimal 2 00:08:46.932 12:07:42 env -- scripts/common.sh@353 -- # local d=2 00:08:46.932 12:07:42 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:46.932 12:07:42 env -- scripts/common.sh@355 -- # echo 2 00:08:46.932 12:07:42 env -- scripts/common.sh@366 -- # ver2[v]=2 00:08:46.932 12:07:42 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:46.932 12:07:42 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:46.932 12:07:42 env -- scripts/common.sh@368 -- # return 0 00:08:46.932 12:07:42 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:46.932 12:07:42 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:46.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.932 --rc genhtml_branch_coverage=1 00:08:46.932 --rc genhtml_function_coverage=1 00:08:46.932 --rc genhtml_legend=1 00:08:46.932 --rc geninfo_all_blocks=1 00:08:46.932 --rc geninfo_unexecuted_blocks=1 00:08:46.932 00:08:46.932 ' 00:08:46.932 12:07:42 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:46.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.932 --rc genhtml_branch_coverage=1 00:08:46.932 --rc genhtml_function_coverage=1 00:08:46.932 --rc genhtml_legend=1 00:08:46.932 --rc 
geninfo_all_blocks=1 00:08:46.932 --rc geninfo_unexecuted_blocks=1 00:08:46.932 00:08:46.932 ' 00:08:46.932 12:07:42 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:46.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.932 --rc genhtml_branch_coverage=1 00:08:46.932 --rc genhtml_function_coverage=1 00:08:46.932 --rc genhtml_legend=1 00:08:46.932 --rc geninfo_all_blocks=1 00:08:46.932 --rc geninfo_unexecuted_blocks=1 00:08:46.932 00:08:46.932 ' 00:08:46.932 12:07:42 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:46.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.932 --rc genhtml_branch_coverage=1 00:08:46.932 --rc genhtml_function_coverage=1 00:08:46.932 --rc genhtml_legend=1 00:08:46.932 --rc geninfo_all_blocks=1 00:08:46.932 --rc geninfo_unexecuted_blocks=1 00:08:46.932 00:08:46.932 ' 00:08:46.932 12:07:42 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:46.932 12:07:42 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:46.932 12:07:42 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.932 12:07:42 env -- common/autotest_common.sh@10 -- # set +x 00:08:46.932 ************************************ 00:08:46.932 START TEST env_memory 00:08:46.932 ************************************ 00:08:46.932 12:07:42 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:46.932 00:08:46.932 00:08:46.932 CUnit - A unit testing framework for C - Version 2.1-3 00:08:46.932 http://cunit.sourceforge.net/ 00:08:46.932 00:08:46.932 00:08:46.932 Suite: memory 00:08:47.191 Test: alloc and free memory map ...[2024-11-25 12:07:43.054419] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:47.191 passed 00:08:47.191 Test: mem map translation ...[2024-11-25 12:07:43.115428] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:47.191 [2024-11-25 12:07:43.115529] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:47.191 [2024-11-25 12:07:43.115630] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:47.191 [2024-11-25 12:07:43.115658] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:47.191 passed 00:08:47.191 Test: mem map registration ...[2024-11-25 12:07:43.214082] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:08:47.191 [2024-11-25 12:07:43.214191] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:08:47.191 passed 00:08:47.450 Test: mem map adjacent registrations ...passed 00:08:47.450 00:08:47.450 Run Summary: Type Total Ran Passed Failed Inactive 00:08:47.450 suites 1 1 n/a 0 0 00:08:47.450 tests 4 4 4 0 0 00:08:47.450 asserts 152 152 152 0 n/a 00:08:47.450 00:08:47.450 Elapsed time = 0.338 seconds 00:08:47.450 00:08:47.450 real 0m0.377s 00:08:47.450 user 0m0.346s 00:08:47.450 sys 0m0.024s 00:08:47.450 12:07:43 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.450 ************************************ 00:08:47.450 END TEST env_memory 00:08:47.450 ************************************ 00:08:47.450 12:07:43 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:08:47.450 12:07:43 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:47.450 
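The `scripts/common.sh` version gate traced above (`lt 1.15 2`, used to pick lcov options) splits versions on `.`, `-` and `:` and compares numerically field by field. A simplified stand-in for that logic (not the actual `cmp_versions` implementation):

```shell
ver_lt() {
    local IFS=.-: v a b n
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( v = 0; v < n; v++ )); do
        # Missing fields compare as 0, so "2" behaves like "2.0".
        local x=${a[v]:-0} y=${b[v]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1        # equal is not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

Field-wise numeric comparison is why `1.15` sorts below `2` here even though it would sort above it as a string.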
12:07:43 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:47.450 12:07:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.450 12:07:43 env -- common/autotest_common.sh@10 -- # set +x 00:08:47.450 ************************************ 00:08:47.450 START TEST env_vtophys 00:08:47.450 ************************************ 00:08:47.450 12:07:43 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:47.450 EAL: lib.eal log level changed from notice to debug 00:08:47.450 EAL: Detected lcore 0 as core 0 on socket 0 00:08:47.450 EAL: Detected lcore 1 as core 0 on socket 0 00:08:47.450 EAL: Detected lcore 2 as core 0 on socket 0 00:08:47.450 EAL: Detected lcore 3 as core 0 on socket 0 00:08:47.450 EAL: Detected lcore 4 as core 0 on socket 0 00:08:47.450 EAL: Detected lcore 5 as core 0 on socket 0 00:08:47.450 EAL: Detected lcore 6 as core 0 on socket 0 00:08:47.450 EAL: Detected lcore 7 as core 0 on socket 0 00:08:47.450 EAL: Detected lcore 8 as core 0 on socket 0 00:08:47.450 EAL: Detected lcore 9 as core 0 on socket 0 00:08:47.450 EAL: Maximum logical cores by configuration: 128 00:08:47.450 EAL: Detected CPU lcores: 10 00:08:47.450 EAL: Detected NUMA nodes: 1 00:08:47.450 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:08:47.450 EAL: Detected shared linkage of DPDK 00:08:47.450 EAL: No shared files mode enabled, IPC will be disabled 00:08:47.450 EAL: Selected IOVA mode 'PA' 00:08:47.450 EAL: Probing VFIO support... 00:08:47.450 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:47.450 EAL: VFIO modules not loaded, skipping VFIO support... 00:08:47.450 EAL: Ask a virtual area of 0x2e000 bytes 00:08:47.450 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:47.450 EAL: Setting up physically contiguous memory... 
00:08:47.450 EAL: Setting maximum number of open files to 524288 00:08:47.450 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:47.450 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:47.450 EAL: Ask a virtual area of 0x61000 bytes 00:08:47.450 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:47.450 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:47.450 EAL: Ask a virtual area of 0x400000000 bytes 00:08:47.450 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:47.450 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:47.450 EAL: Ask a virtual area of 0x61000 bytes 00:08:47.450 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:47.450 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:47.450 EAL: Ask a virtual area of 0x400000000 bytes 00:08:47.450 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:47.450 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:47.450 EAL: Ask a virtual area of 0x61000 bytes 00:08:47.450 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:47.450 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:47.450 EAL: Ask a virtual area of 0x400000000 bytes 00:08:47.450 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:47.450 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:47.450 EAL: Ask a virtual area of 0x61000 bytes 00:08:47.450 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:47.450 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:47.450 EAL: Ask a virtual area of 0x400000000 bytes 00:08:47.450 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:47.450 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:47.450 EAL: Hugepages will be freed exactly as allocated. 
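The EAL reservations above are internally consistent: each memseg list reserves `n_segs * hugepage_sz` of virtual address space, so 8192 segments of 2 MiB pages should equal the `0x400000000` (16 GiB) areas the log reports. A quick arithmetic check:

```shell
n_segs=8192
hugepage_sz=$(( 2 * 1024 * 1024 ))    # 2097152 bytes, as logged
reserved=$(( n_segs * hugepage_sz ))
printf '0x%x\n' "$reserved"           # should match the 0x400000000 reservations
```

The four lists together pre-reserve 64 GiB of virtual space even though no hugepages are backed yet ("Hugepages will be freed exactly as allocated").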
00:08:47.450 EAL: No shared files mode enabled, IPC is disabled 00:08:47.450 EAL: No shared files mode enabled, IPC is disabled 00:08:47.709 EAL: TSC frequency is ~2200000 KHz 00:08:47.709 EAL: Main lcore 0 is ready (tid=7f3bbd6a3a40;cpuset=[0]) 00:08:47.709 EAL: Trying to obtain current memory policy. 00:08:47.709 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:47.709 EAL: Restoring previous memory policy: 0 00:08:47.709 EAL: request: mp_malloc_sync 00:08:47.709 EAL: No shared files mode enabled, IPC is disabled 00:08:47.709 EAL: Heap on socket 0 was expanded by 2MB 00:08:47.709 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:47.709 EAL: No PCI address specified using 'addr=' in: bus=pci 00:08:47.709 EAL: Mem event callback 'spdk:(nil)' registered 00:08:47.709 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:08:47.709 00:08:47.709 00:08:47.709 CUnit - A unit testing framework for C - Version 2.1-3 00:08:47.709 http://cunit.sourceforge.net/ 00:08:47.709 00:08:47.709 00:08:47.709 Suite: components_suite 00:08:48.276 Test: vtophys_malloc_test ...passed 00:08:48.276 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:08:48.276 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:48.276 EAL: Restoring previous memory policy: 4 00:08:48.276 EAL: Calling mem event callback 'spdk:(nil)' 00:08:48.276 EAL: request: mp_malloc_sync 00:08:48.276 EAL: No shared files mode enabled, IPC is disabled 00:08:48.276 EAL: Heap on socket 0 was expanded by 4MB 00:08:48.276 EAL: Calling mem event callback 'spdk:(nil)' 00:08:48.276 EAL: request: mp_malloc_sync 00:08:48.276 EAL: No shared files mode enabled, IPC is disabled 00:08:48.276 EAL: Heap on socket 0 was shrunk by 4MB 00:08:48.276 EAL: Trying to obtain current memory policy. 
00:08:48.276 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:48.276 EAL: Restoring previous memory policy: 4 00:08:48.276 EAL: Calling mem event callback 'spdk:(nil)' 00:08:48.276 EAL: request: mp_malloc_sync 00:08:48.276 EAL: No shared files mode enabled, IPC is disabled 00:08:48.276 EAL: Heap on socket 0 was expanded by 6MB 00:08:48.276 EAL: Calling mem event callback 'spdk:(nil)' 00:08:48.276 EAL: request: mp_malloc_sync 00:08:48.276 EAL: No shared files mode enabled, IPC is disabled 00:08:48.276 EAL: Heap on socket 0 was shrunk by 6MB 00:08:48.276 EAL: Trying to obtain current memory policy. 00:08:48.276 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:48.276 EAL: Restoring previous memory policy: 4 00:08:48.276 EAL: Calling mem event callback 'spdk:(nil)' 00:08:48.276 EAL: request: mp_malloc_sync 00:08:48.276 EAL: No shared files mode enabled, IPC is disabled 00:08:48.276 EAL: Heap on socket 0 was expanded by 10MB 00:08:48.276 EAL: Calling mem event callback 'spdk:(nil)' 00:08:48.276 EAL: request: mp_malloc_sync 00:08:48.276 EAL: No shared files mode enabled, IPC is disabled 00:08:48.276 EAL: Heap on socket 0 was shrunk by 10MB 00:08:48.276 EAL: Trying to obtain current memory policy. 00:08:48.276 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:48.276 EAL: Restoring previous memory policy: 4 00:08:48.276 EAL: Calling mem event callback 'spdk:(nil)' 00:08:48.276 EAL: request: mp_malloc_sync 00:08:48.276 EAL: No shared files mode enabled, IPC is disabled 00:08:48.276 EAL: Heap on socket 0 was expanded by 18MB 00:08:48.276 EAL: Calling mem event callback 'spdk:(nil)' 00:08:48.276 EAL: request: mp_malloc_sync 00:08:48.276 EAL: No shared files mode enabled, IPC is disabled 00:08:48.276 EAL: Heap on socket 0 was shrunk by 18MB 00:08:48.276 EAL: Trying to obtain current memory policy. 
00:08:48.276 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:48.276 EAL: Restoring previous memory policy: 4 00:08:48.276 EAL: Calling mem event callback 'spdk:(nil)' 00:08:48.276 EAL: request: mp_malloc_sync 00:08:48.276 EAL: No shared files mode enabled, IPC is disabled 00:08:48.276 EAL: Heap on socket 0 was expanded by 34MB 00:08:48.276 EAL: Calling mem event callback 'spdk:(nil)' 00:08:48.276 EAL: request: mp_malloc_sync 00:08:48.276 EAL: No shared files mode enabled, IPC is disabled 00:08:48.276 EAL: Heap on socket 0 was shrunk by 34MB 00:08:48.534 EAL: Trying to obtain current memory policy. 00:08:48.534 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:48.534 EAL: Restoring previous memory policy: 4 00:08:48.534 EAL: Calling mem event callback 'spdk:(nil)' 00:08:48.534 EAL: request: mp_malloc_sync 00:08:48.534 EAL: No shared files mode enabled, IPC is disabled 00:08:48.534 EAL: Heap on socket 0 was expanded by 66MB 00:08:48.534 EAL: Calling mem event callback 'spdk:(nil)' 00:08:48.534 EAL: request: mp_malloc_sync 00:08:48.534 EAL: No shared files mode enabled, IPC is disabled 00:08:48.534 EAL: Heap on socket 0 was shrunk by 66MB 00:08:48.534 EAL: Trying to obtain current memory policy. 00:08:48.534 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:48.792 EAL: Restoring previous memory policy: 4 00:08:48.792 EAL: Calling mem event callback 'spdk:(nil)' 00:08:48.792 EAL: request: mp_malloc_sync 00:08:48.792 EAL: No shared files mode enabled, IPC is disabled 00:08:48.792 EAL: Heap on socket 0 was expanded by 130MB 00:08:48.792 EAL: Calling mem event callback 'spdk:(nil)' 00:08:48.792 EAL: request: mp_malloc_sync 00:08:48.792 EAL: No shared files mode enabled, IPC is disabled 00:08:48.792 EAL: Heap on socket 0 was shrunk by 130MB 00:08:49.050 EAL: Trying to obtain current memory policy. 
00:08:49.050 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:49.050 EAL: Restoring previous memory policy: 4 00:08:49.050 EAL: Calling mem event callback 'spdk:(nil)' 00:08:49.050 EAL: request: mp_malloc_sync 00:08:49.050 EAL: No shared files mode enabled, IPC is disabled 00:08:49.050 EAL: Heap on socket 0 was expanded by 258MB 00:08:49.618 EAL: Calling mem event callback 'spdk:(nil)' 00:08:49.618 EAL: request: mp_malloc_sync 00:08:49.618 EAL: No shared files mode enabled, IPC is disabled 00:08:49.618 EAL: Heap on socket 0 was shrunk by 258MB 00:08:49.877 EAL: Trying to obtain current memory policy. 00:08:49.877 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:50.136 EAL: Restoring previous memory policy: 4 00:08:50.136 EAL: Calling mem event callback 'spdk:(nil)' 00:08:50.136 EAL: request: mp_malloc_sync 00:08:50.136 EAL: No shared files mode enabled, IPC is disabled 00:08:50.136 EAL: Heap on socket 0 was expanded by 514MB 00:08:51.072 EAL: Calling mem event callback 'spdk:(nil)' 00:08:51.072 EAL: request: mp_malloc_sync 00:08:51.072 EAL: No shared files mode enabled, IPC is disabled 00:08:51.072 EAL: Heap on socket 0 was shrunk by 514MB 00:08:52.009 EAL: Trying to obtain current memory policy. 
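The expansion sizes in `vtophys_spdk_malloc_test` above (4, 6, 10, 18, 34, 66, ... MB) appear to follow `2 + 2^k` MB, doubling the power-of-two part each round; this sketch just reproduces that ladder, it is not taken from the test source:

```shell
sizes=()
for (( k = 1; k <= 10; k++ )); do
    sizes+=( $(( 2 + (1 << k) )) )    # 2 MB base heap plus a doubling allocation
done
echo "${sizes[*]} (MB)"
```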
00:08:52.009 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:52.009 EAL: Restoring previous memory policy: 4 00:08:52.009 EAL: Calling mem event callback 'spdk:(nil)' 00:08:52.009 EAL: request: mp_malloc_sync 00:08:52.009 EAL: No shared files mode enabled, IPC is disabled 00:08:52.009 EAL: Heap on socket 0 was expanded by 1026MB 00:08:53.913 EAL: Calling mem event callback 'spdk:(nil)' 00:08:53.913 EAL: request: mp_malloc_sync 00:08:53.913 EAL: No shared files mode enabled, IPC is disabled 00:08:53.913 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:55.292 passed 00:08:55.292 00:08:55.292 Run Summary: Type Total Ran Passed Failed Inactive 00:08:55.292 suites 1 1 n/a 0 0 00:08:55.292 tests 2 2 2 0 0 00:08:55.292 asserts 5691 5691 5691 0 n/a 00:08:55.292 00:08:55.292 Elapsed time = 7.642 seconds 00:08:55.292 EAL: Calling mem event callback 'spdk:(nil)' 00:08:55.292 EAL: request: mp_malloc_sync 00:08:55.292 EAL: No shared files mode enabled, IPC is disabled 00:08:55.292 EAL: Heap on socket 0 was shrunk by 2MB 00:08:55.292 EAL: No shared files mode enabled, IPC is disabled 00:08:55.292 EAL: No shared files mode enabled, IPC is disabled 00:08:55.292 EAL: No shared files mode enabled, IPC is disabled 00:08:55.551 00:08:55.551 real 0m7.978s 00:08:55.551 user 0m6.757s 00:08:55.551 sys 0m1.050s 00:08:55.551 ************************************ 00:08:55.551 END TEST env_vtophys 00:08:55.551 ************************************ 00:08:55.551 12:07:51 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.551 12:07:51 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:08:55.551 12:07:51 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:55.551 12:07:51 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:55.551 12:07:51 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.551 12:07:51 env -- common/autotest_common.sh@10 -- # set +x 00:08:55.551 
************************************ 00:08:55.551 START TEST env_pci 00:08:55.551 ************************************ 00:08:55.551 12:07:51 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:55.551 00:08:55.551 00:08:55.551 CUnit - A unit testing framework for C - Version 2.1-3 00:08:55.551 http://cunit.sourceforge.net/ 00:08:55.551 00:08:55.551 00:08:55.551 Suite: pci 00:08:55.551 Test: pci_hook ...[2024-11-25 12:07:51.479026] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56728 has claimed it 00:08:55.551 passed 00:08:55.551 00:08:55.551 Run Summary: Type Total Ran Passed Failed Inactive 00:08:55.551 suites 1 1 n/a 0 0 00:08:55.551 tests 1 1 1 0 0 00:08:55.551 asserts 25 25 25 0 n/a 00:08:55.551 00:08:55.551 Elapsed time = 0.007 seconds 00:08:55.551 EAL: Cannot find device (10000:00:01.0) 00:08:55.551 EAL: Failed to attach device on primary process 00:08:55.551 00:08:55.551 real 0m0.086s 00:08:55.551 user 0m0.046s 00:08:55.551 sys 0m0.038s 00:08:55.551 12:07:51 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.551 ************************************ 00:08:55.551 END TEST env_pci 00:08:55.551 ************************************ 00:08:55.551 12:07:51 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:08:55.551 12:07:51 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:55.551 12:07:51 env -- env/env.sh@15 -- # uname 00:08:55.551 12:07:51 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:55.551 12:07:51 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:08:55.551 12:07:51 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:55.551 12:07:51 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:55.551 12:07:51 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.551 12:07:51 env -- common/autotest_common.sh@10 -- # set +x 00:08:55.551 ************************************ 00:08:55.551 START TEST env_dpdk_post_init 00:08:55.551 ************************************ 00:08:55.551 12:07:51 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:55.810 EAL: Detected CPU lcores: 10 00:08:55.810 EAL: Detected NUMA nodes: 1 00:08:55.810 EAL: Detected shared linkage of DPDK 00:08:55.810 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:55.810 EAL: Selected IOVA mode 'PA' 00:08:55.810 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:55.810 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:08:55.810 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:08:55.810 Starting DPDK initialization... 00:08:55.810 Starting SPDK post initialization... 00:08:55.810 SPDK NVMe probe 00:08:55.810 Attaching to 0000:00:10.0 00:08:55.810 Attaching to 0000:00:11.0 00:08:55.810 Attached to 0000:00:10.0 00:08:55.810 Attached to 0000:00:11.0 00:08:55.810 Cleaning up... 
00:08:55.810 00:08:55.810 real 0m0.300s 00:08:55.810 user 0m0.098s 00:08:55.810 sys 0m0.100s 00:08:55.810 12:07:51 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.810 12:07:51 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:08:55.810 ************************************ 00:08:55.810 END TEST env_dpdk_post_init 00:08:55.810 ************************************ 00:08:56.069 12:07:51 env -- env/env.sh@26 -- # uname 00:08:56.069 12:07:51 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:56.069 12:07:51 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:56.069 12:07:51 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:56.069 12:07:51 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.069 12:07:51 env -- common/autotest_common.sh@10 -- # set +x 00:08:56.069 ************************************ 00:08:56.069 START TEST env_mem_callbacks 00:08:56.069 ************************************ 00:08:56.069 12:07:51 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:56.069 EAL: Detected CPU lcores: 10 00:08:56.069 EAL: Detected NUMA nodes: 1 00:08:56.069 EAL: Detected shared linkage of DPDK 00:08:56.069 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:56.069 EAL: Selected IOVA mode 'PA' 00:08:56.069 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:56.069 00:08:56.069 00:08:56.069 CUnit - A unit testing framework for C - Version 2.1-3 00:08:56.069 http://cunit.sourceforge.net/ 00:08:56.069 00:08:56.069 00:08:56.069 Suite: memory 00:08:56.069 Test: test ... 
00:08:56.069 register 0x200000200000 2097152 00:08:56.069 malloc 3145728 00:08:56.069 register 0x200000400000 4194304 00:08:56.069 buf 0x2000004fffc0 len 3145728 PASSED 00:08:56.069 malloc 64 00:08:56.069 buf 0x2000004ffec0 len 64 PASSED 00:08:56.069 malloc 4194304 00:08:56.069 register 0x200000800000 6291456 00:08:56.069 buf 0x2000009fffc0 len 4194304 PASSED 00:08:56.069 free 0x2000004fffc0 3145728 00:08:56.069 free 0x2000004ffec0 64 00:08:56.069 unregister 0x200000400000 4194304 PASSED 00:08:56.328 free 0x2000009fffc0 4194304 00:08:56.328 unregister 0x200000800000 6291456 PASSED 00:08:56.328 malloc 8388608 00:08:56.328 register 0x200000400000 10485760 00:08:56.328 buf 0x2000005fffc0 len 8388608 PASSED 00:08:56.328 free 0x2000005fffc0 8388608 00:08:56.328 unregister 0x200000400000 10485760 PASSED 00:08:56.328 passed 00:08:56.328 00:08:56.328 Run Summary: Type Total Ran Passed Failed Inactive 00:08:56.328 suites 1 1 n/a 0 0 00:08:56.328 tests 1 1 1 0 0 00:08:56.328 asserts 15 15 15 0 n/a 00:08:56.328 00:08:56.328 Elapsed time = 0.062 seconds 00:08:56.328 00:08:56.329 real 0m0.274s 00:08:56.329 user 0m0.088s 00:08:56.329 sys 0m0.082s 00:08:56.329 12:07:52 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.329 ************************************ 00:08:56.329 12:07:52 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:08:56.329 END TEST env_mem_callbacks 00:08:56.329 ************************************ 00:08:56.329 ************************************ 00:08:56.329 END TEST env 00:08:56.329 ************************************ 00:08:56.329 00:08:56.329 real 0m9.481s 00:08:56.329 user 0m7.547s 00:08:56.329 sys 0m1.539s 00:08:56.329 12:07:52 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.329 12:07:52 env -- common/autotest_common.sh@10 -- # set +x 00:08:56.329 12:07:52 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:56.329 12:07:52 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:56.329 12:07:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.329 12:07:52 -- common/autotest_common.sh@10 -- # set +x 00:08:56.329 ************************************ 00:08:56.329 START TEST rpc 00:08:56.329 ************************************ 00:08:56.329 12:07:52 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:56.329 * Looking for test storage... 00:08:56.329 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:56.329 12:07:52 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:56.329 12:07:52 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:08:56.329 12:07:52 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:56.588 12:07:52 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:56.588 12:07:52 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:56.588 12:07:52 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:56.588 12:07:52 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:56.588 12:07:52 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:56.588 12:07:52 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:56.588 12:07:52 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:56.588 12:07:52 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:56.588 12:07:52 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:56.588 12:07:52 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:56.588 12:07:52 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:56.588 12:07:52 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:56.588 12:07:52 rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:56.588 12:07:52 rpc -- scripts/common.sh@345 -- # : 1 00:08:56.588 12:07:52 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:56.588 12:07:52 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:56.588 12:07:52 rpc -- scripts/common.sh@365 -- # decimal 1 00:08:56.588 12:07:52 rpc -- scripts/common.sh@353 -- # local d=1 00:08:56.588 12:07:52 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:56.588 12:07:52 rpc -- scripts/common.sh@355 -- # echo 1 00:08:56.588 12:07:52 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:56.588 12:07:52 rpc -- scripts/common.sh@366 -- # decimal 2 00:08:56.588 12:07:52 rpc -- scripts/common.sh@353 -- # local d=2 00:08:56.588 12:07:52 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:56.588 12:07:52 rpc -- scripts/common.sh@355 -- # echo 2 00:08:56.588 12:07:52 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:56.588 12:07:52 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:56.588 12:07:52 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:56.588 12:07:52 rpc -- scripts/common.sh@368 -- # return 0 00:08:56.588 12:07:52 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:56.588 12:07:52 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:56.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.588 --rc genhtml_branch_coverage=1 00:08:56.588 --rc genhtml_function_coverage=1 00:08:56.588 --rc genhtml_legend=1 00:08:56.588 --rc geninfo_all_blocks=1 00:08:56.588 --rc geninfo_unexecuted_blocks=1 00:08:56.588 00:08:56.588 ' 00:08:56.588 12:07:52 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:56.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.588 --rc genhtml_branch_coverage=1 00:08:56.588 --rc genhtml_function_coverage=1 00:08:56.588 --rc genhtml_legend=1 00:08:56.588 --rc geninfo_all_blocks=1 00:08:56.588 --rc geninfo_unexecuted_blocks=1 00:08:56.588 00:08:56.588 ' 00:08:56.588 12:07:52 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:56.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:08:56.588 --rc genhtml_branch_coverage=1 00:08:56.588 --rc genhtml_function_coverage=1 00:08:56.588 --rc genhtml_legend=1 00:08:56.588 --rc geninfo_all_blocks=1 00:08:56.588 --rc geninfo_unexecuted_blocks=1 00:08:56.588 00:08:56.588 ' 00:08:56.588 12:07:52 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:56.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.588 --rc genhtml_branch_coverage=1 00:08:56.588 --rc genhtml_function_coverage=1 00:08:56.588 --rc genhtml_legend=1 00:08:56.588 --rc geninfo_all_blocks=1 00:08:56.588 --rc geninfo_unexecuted_blocks=1 00:08:56.588 00:08:56.588 ' 00:08:56.588 12:07:52 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56855 00:08:56.588 12:07:52 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:08:56.588 12:07:52 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:56.588 12:07:52 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56855 00:08:56.588 12:07:52 rpc -- common/autotest_common.sh@835 -- # '[' -z 56855 ']' 00:08:56.588 12:07:52 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.588 12:07:52 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:56.588 12:07:52 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.588 12:07:52 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:56.588 12:07:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.588 [2024-11-25 12:07:52.647984] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:08:56.588 [2024-11-25 12:07:52.648384] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56855 ] 00:08:56.847 [2024-11-25 12:07:52.846464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.107 [2024-11-25 12:07:53.004350] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:57.107 [2024-11-25 12:07:53.004687] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56855' to capture a snapshot of events at runtime. 00:08:57.107 [2024-11-25 12:07:53.004864] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:57.107 [2024-11-25 12:07:53.005149] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:57.107 [2024-11-25 12:07:53.005217] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56855 for offline analysis/debug. 
00:08:57.107 [2024-11-25 12:07:53.007015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.044 12:07:53 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:58.044 12:07:53 rpc -- common/autotest_common.sh@868 -- # return 0 00:08:58.044 12:07:53 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:58.044 12:07:53 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:58.044 12:07:53 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:58.044 12:07:53 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:58.044 12:07:53 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:58.044 12:07:53 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.044 12:07:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:58.044 ************************************ 00:08:58.044 START TEST rpc_integrity 00:08:58.044 ************************************ 00:08:58.044 12:07:53 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:08:58.044 12:07:53 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:58.044 12:07:53 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.044 12:07:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:58.044 12:07:53 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.044 12:07:53 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:58.044 12:07:53 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:58.044 12:07:53 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:58.044 12:07:53 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:58.044 12:07:53 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.044 12:07:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:58.044 12:07:53 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.044 12:07:53 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:58.044 12:07:53 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:58.044 12:07:53 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.044 12:07:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:58.044 12:07:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.044 12:07:54 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:58.044 { 00:08:58.044 "name": "Malloc0", 00:08:58.044 "aliases": [ 00:08:58.044 "8e099323-1742-42eb-a47e-d0ee85b4df62" 00:08:58.044 ], 00:08:58.044 "product_name": "Malloc disk", 00:08:58.044 "block_size": 512, 00:08:58.044 "num_blocks": 16384, 00:08:58.044 "uuid": "8e099323-1742-42eb-a47e-d0ee85b4df62", 00:08:58.044 "assigned_rate_limits": { 00:08:58.044 "rw_ios_per_sec": 0, 00:08:58.044 "rw_mbytes_per_sec": 0, 00:08:58.044 "r_mbytes_per_sec": 0, 00:08:58.044 "w_mbytes_per_sec": 0 00:08:58.044 }, 00:08:58.044 "claimed": false, 00:08:58.044 "zoned": false, 00:08:58.044 "supported_io_types": { 00:08:58.044 "read": true, 00:08:58.044 "write": true, 00:08:58.044 "unmap": true, 00:08:58.044 "flush": true, 00:08:58.044 "reset": true, 00:08:58.044 "nvme_admin": false, 00:08:58.044 "nvme_io": false, 00:08:58.044 "nvme_io_md": false, 00:08:58.044 "write_zeroes": true, 00:08:58.044 "zcopy": true, 00:08:58.044 "get_zone_info": false, 00:08:58.044 "zone_management": false, 00:08:58.044 "zone_append": false, 00:08:58.044 "compare": false, 00:08:58.044 "compare_and_write": false, 00:08:58.044 "abort": true, 00:08:58.044 "seek_hole": false, 
00:08:58.044 "seek_data": false, 00:08:58.044 "copy": true, 00:08:58.044 "nvme_iov_md": false 00:08:58.044 }, 00:08:58.044 "memory_domains": [ 00:08:58.044 { 00:08:58.044 "dma_device_id": "system", 00:08:58.044 "dma_device_type": 1 00:08:58.044 }, 00:08:58.044 { 00:08:58.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.044 "dma_device_type": 2 00:08:58.044 } 00:08:58.044 ], 00:08:58.044 "driver_specific": {} 00:08:58.044 } 00:08:58.044 ]' 00:08:58.044 12:07:54 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:58.044 12:07:54 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:58.044 12:07:54 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:58.044 12:07:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.044 12:07:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:58.044 [2024-11-25 12:07:54.067896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:58.044 [2024-11-25 12:07:54.067995] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:58.044 [2024-11-25 12:07:54.068036] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:58.044 [2024-11-25 12:07:54.068061] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:58.044 [2024-11-25 12:07:54.071168] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:58.044 [2024-11-25 12:07:54.071386] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:58.044 Passthru0 00:08:58.044 12:07:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.044 12:07:54 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:58.044 12:07:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.044 12:07:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:08:58.044 12:07:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.044 12:07:54 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:58.044 { 00:08:58.044 "name": "Malloc0", 00:08:58.044 "aliases": [ 00:08:58.044 "8e099323-1742-42eb-a47e-d0ee85b4df62" 00:08:58.044 ], 00:08:58.044 "product_name": "Malloc disk", 00:08:58.044 "block_size": 512, 00:08:58.044 "num_blocks": 16384, 00:08:58.044 "uuid": "8e099323-1742-42eb-a47e-d0ee85b4df62", 00:08:58.044 "assigned_rate_limits": { 00:08:58.044 "rw_ios_per_sec": 0, 00:08:58.044 "rw_mbytes_per_sec": 0, 00:08:58.044 "r_mbytes_per_sec": 0, 00:08:58.044 "w_mbytes_per_sec": 0 00:08:58.044 }, 00:08:58.044 "claimed": true, 00:08:58.044 "claim_type": "exclusive_write", 00:08:58.044 "zoned": false, 00:08:58.044 "supported_io_types": { 00:08:58.044 "read": true, 00:08:58.044 "write": true, 00:08:58.044 "unmap": true, 00:08:58.044 "flush": true, 00:08:58.044 "reset": true, 00:08:58.044 "nvme_admin": false, 00:08:58.044 "nvme_io": false, 00:08:58.044 "nvme_io_md": false, 00:08:58.044 "write_zeroes": true, 00:08:58.044 "zcopy": true, 00:08:58.044 "get_zone_info": false, 00:08:58.044 "zone_management": false, 00:08:58.044 "zone_append": false, 00:08:58.044 "compare": false, 00:08:58.044 "compare_and_write": false, 00:08:58.044 "abort": true, 00:08:58.044 "seek_hole": false, 00:08:58.044 "seek_data": false, 00:08:58.044 "copy": true, 00:08:58.044 "nvme_iov_md": false 00:08:58.044 }, 00:08:58.044 "memory_domains": [ 00:08:58.044 { 00:08:58.044 "dma_device_id": "system", 00:08:58.044 "dma_device_type": 1 00:08:58.044 }, 00:08:58.044 { 00:08:58.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.044 "dma_device_type": 2 00:08:58.044 } 00:08:58.044 ], 00:08:58.044 "driver_specific": {} 00:08:58.044 }, 00:08:58.044 { 00:08:58.044 "name": "Passthru0", 00:08:58.044 "aliases": [ 00:08:58.044 "5ee9e293-7611-505c-a338-8185f8287878" 00:08:58.044 ], 00:08:58.044 "product_name": "passthru", 00:08:58.044 
"block_size": 512, 00:08:58.044 "num_blocks": 16384, 00:08:58.044 "uuid": "5ee9e293-7611-505c-a338-8185f8287878", 00:08:58.044 "assigned_rate_limits": { 00:08:58.044 "rw_ios_per_sec": 0, 00:08:58.044 "rw_mbytes_per_sec": 0, 00:08:58.044 "r_mbytes_per_sec": 0, 00:08:58.044 "w_mbytes_per_sec": 0 00:08:58.044 }, 00:08:58.044 "claimed": false, 00:08:58.044 "zoned": false, 00:08:58.044 "supported_io_types": { 00:08:58.044 "read": true, 00:08:58.044 "write": true, 00:08:58.044 "unmap": true, 00:08:58.044 "flush": true, 00:08:58.044 "reset": true, 00:08:58.044 "nvme_admin": false, 00:08:58.044 "nvme_io": false, 00:08:58.044 "nvme_io_md": false, 00:08:58.044 "write_zeroes": true, 00:08:58.044 "zcopy": true, 00:08:58.045 "get_zone_info": false, 00:08:58.045 "zone_management": false, 00:08:58.045 "zone_append": false, 00:08:58.045 "compare": false, 00:08:58.045 "compare_and_write": false, 00:08:58.045 "abort": true, 00:08:58.045 "seek_hole": false, 00:08:58.045 "seek_data": false, 00:08:58.045 "copy": true, 00:08:58.045 "nvme_iov_md": false 00:08:58.045 }, 00:08:58.045 "memory_domains": [ 00:08:58.045 { 00:08:58.045 "dma_device_id": "system", 00:08:58.045 "dma_device_type": 1 00:08:58.045 }, 00:08:58.045 { 00:08:58.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.045 "dma_device_type": 2 00:08:58.045 } 00:08:58.045 ], 00:08:58.045 "driver_specific": { 00:08:58.045 "passthru": { 00:08:58.045 "name": "Passthru0", 00:08:58.045 "base_bdev_name": "Malloc0" 00:08:58.045 } 00:08:58.045 } 00:08:58.045 } 00:08:58.045 ]' 00:08:58.045 12:07:54 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:58.305 12:07:54 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:58.305 12:07:54 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:58.305 12:07:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.305 12:07:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:58.305 12:07:54 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.305 12:07:54 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:58.305 12:07:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.305 12:07:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:58.305 12:07:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.305 12:07:54 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:58.305 12:07:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.305 12:07:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:58.305 12:07:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.305 12:07:54 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:58.305 12:07:54 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:58.305 ************************************ 00:08:58.305 END TEST rpc_integrity 00:08:58.305 ************************************ 00:08:58.305 12:07:54 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:58.305 00:08:58.305 real 0m0.362s 00:08:58.305 user 0m0.214s 00:08:58.305 sys 0m0.055s 00:08:58.305 12:07:54 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:58.305 12:07:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:58.305 12:07:54 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:58.305 12:07:54 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:58.305 12:07:54 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.305 12:07:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:58.305 ************************************ 00:08:58.305 START TEST rpc_plugins 00:08:58.305 ************************************ 00:08:58.305 12:07:54 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:08:58.305 12:07:54 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:08:58.305 12:07:54 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.305 12:07:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:58.305 12:07:54 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.305 12:07:54 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:58.305 12:07:54 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:58.305 12:07:54 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.305 12:07:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:58.305 12:07:54 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.305 12:07:54 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:58.305 { 00:08:58.305 "name": "Malloc1", 00:08:58.305 "aliases": [ 00:08:58.305 "d3d9d775-07af-4dad-9fed-5f28153ed972" 00:08:58.305 ], 00:08:58.305 "product_name": "Malloc disk", 00:08:58.305 "block_size": 4096, 00:08:58.305 "num_blocks": 256, 00:08:58.305 "uuid": "d3d9d775-07af-4dad-9fed-5f28153ed972", 00:08:58.305 "assigned_rate_limits": { 00:08:58.305 "rw_ios_per_sec": 0, 00:08:58.305 "rw_mbytes_per_sec": 0, 00:08:58.305 "r_mbytes_per_sec": 0, 00:08:58.305 "w_mbytes_per_sec": 0 00:08:58.305 }, 00:08:58.305 "claimed": false, 00:08:58.305 "zoned": false, 00:08:58.305 "supported_io_types": { 00:08:58.305 "read": true, 00:08:58.305 "write": true, 00:08:58.305 "unmap": true, 00:08:58.305 "flush": true, 00:08:58.305 "reset": true, 00:08:58.305 "nvme_admin": false, 00:08:58.305 "nvme_io": false, 00:08:58.305 "nvme_io_md": false, 00:08:58.305 "write_zeroes": true, 00:08:58.305 "zcopy": true, 00:08:58.305 "get_zone_info": false, 00:08:58.305 "zone_management": false, 00:08:58.305 "zone_append": false, 00:08:58.305 "compare": false, 00:08:58.305 "compare_and_write": false, 00:08:58.305 "abort": true, 00:08:58.305 "seek_hole": false, 00:08:58.305 "seek_data": false, 00:08:58.305 "copy": 
true, 00:08:58.305 "nvme_iov_md": false 00:08:58.305 }, 00:08:58.305 "memory_domains": [ 00:08:58.305 { 00:08:58.305 "dma_device_id": "system", 00:08:58.305 "dma_device_type": 1 00:08:58.305 }, 00:08:58.305 { 00:08:58.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.305 "dma_device_type": 2 00:08:58.305 } 00:08:58.305 ], 00:08:58.305 "driver_specific": {} 00:08:58.305 } 00:08:58.305 ]' 00:08:58.305 12:07:54 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:08:58.565 12:07:54 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:58.565 12:07:54 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:58.565 12:07:54 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.565 12:07:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:58.565 12:07:54 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.565 12:07:54 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:58.565 12:07:54 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.565 12:07:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:58.565 12:07:54 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.565 12:07:54 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:58.565 12:07:54 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:08:58.565 ************************************ 00:08:58.565 END TEST rpc_plugins 00:08:58.565 ************************************ 00:08:58.565 12:07:54 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:58.565 00:08:58.565 real 0m0.170s 00:08:58.565 user 0m0.103s 00:08:58.565 sys 0m0.024s 00:08:58.565 12:07:54 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:58.565 12:07:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:58.565 12:07:54 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:08:58.565 12:07:54 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:58.565 12:07:54 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.565 12:07:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:58.565 ************************************ 00:08:58.565 START TEST rpc_trace_cmd_test 00:08:58.565 ************************************ 00:08:58.565 12:07:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:08:58.565 12:07:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:08:58.565 12:07:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:58.565 12:07:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.565 12:07:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.565 12:07:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.565 12:07:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:08:58.565 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56855", 00:08:58.565 "tpoint_group_mask": "0x8", 00:08:58.565 "iscsi_conn": { 00:08:58.565 "mask": "0x2", 00:08:58.565 "tpoint_mask": "0x0" 00:08:58.565 }, 00:08:58.565 "scsi": { 00:08:58.565 "mask": "0x4", 00:08:58.565 "tpoint_mask": "0x0" 00:08:58.565 }, 00:08:58.565 "bdev": { 00:08:58.565 "mask": "0x8", 00:08:58.565 "tpoint_mask": "0xffffffffffffffff" 00:08:58.565 }, 00:08:58.565 "nvmf_rdma": { 00:08:58.566 "mask": "0x10", 00:08:58.566 "tpoint_mask": "0x0" 00:08:58.566 }, 00:08:58.566 "nvmf_tcp": { 00:08:58.566 "mask": "0x20", 00:08:58.566 "tpoint_mask": "0x0" 00:08:58.566 }, 00:08:58.566 "ftl": { 00:08:58.566 "mask": "0x40", 00:08:58.566 "tpoint_mask": "0x0" 00:08:58.566 }, 00:08:58.566 "blobfs": { 00:08:58.566 "mask": "0x80", 00:08:58.566 "tpoint_mask": "0x0" 00:08:58.566 }, 00:08:58.566 "dsa": { 00:08:58.566 "mask": "0x200", 00:08:58.566 "tpoint_mask": "0x0" 00:08:58.566 }, 00:08:58.566 "thread": { 00:08:58.566 "mask": "0x400", 00:08:58.566 
"tpoint_mask": "0x0" 00:08:58.566 }, 00:08:58.566 "nvme_pcie": { 00:08:58.566 "mask": "0x800", 00:08:58.566 "tpoint_mask": "0x0" 00:08:58.566 }, 00:08:58.566 "iaa": { 00:08:58.566 "mask": "0x1000", 00:08:58.566 "tpoint_mask": "0x0" 00:08:58.566 }, 00:08:58.566 "nvme_tcp": { 00:08:58.566 "mask": "0x2000", 00:08:58.566 "tpoint_mask": "0x0" 00:08:58.566 }, 00:08:58.566 "bdev_nvme": { 00:08:58.566 "mask": "0x4000", 00:08:58.566 "tpoint_mask": "0x0" 00:08:58.566 }, 00:08:58.566 "sock": { 00:08:58.566 "mask": "0x8000", 00:08:58.566 "tpoint_mask": "0x0" 00:08:58.566 }, 00:08:58.566 "blob": { 00:08:58.566 "mask": "0x10000", 00:08:58.566 "tpoint_mask": "0x0" 00:08:58.566 }, 00:08:58.566 "bdev_raid": { 00:08:58.566 "mask": "0x20000", 00:08:58.566 "tpoint_mask": "0x0" 00:08:58.566 }, 00:08:58.566 "scheduler": { 00:08:58.566 "mask": "0x40000", 00:08:58.566 "tpoint_mask": "0x0" 00:08:58.566 } 00:08:58.566 }' 00:08:58.566 12:07:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:08:58.566 12:07:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:08:58.566 12:07:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:58.825 12:07:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:58.825 12:07:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:58.825 12:07:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:58.825 12:07:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:58.825 12:07:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:58.825 12:07:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:58.825 ************************************ 00:08:58.825 END TEST rpc_trace_cmd_test 00:08:58.825 ************************************ 00:08:58.825 12:07:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:58.825 00:08:58.825 real 0m0.311s 00:08:58.825 user 
0m0.271s 00:08:58.825 sys 0m0.030s 00:08:58.825 12:07:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:58.825 12:07:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.825 12:07:54 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:58.825 12:07:54 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:58.825 12:07:54 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:58.825 12:07:54 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:58.825 12:07:54 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.825 12:07:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:58.825 ************************************ 00:08:58.825 START TEST rpc_daemon_integrity 00:08:58.825 ************************************ 00:08:58.825 12:07:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:08:58.825 12:07:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:58.825 12:07:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.825 12:07:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:59.084 12:07:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.084 12:07:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:59.084 12:07:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:59.084 12:07:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:59.084 12:07:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:59.084 12:07:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.084 12:07:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:59.084 12:07:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.084 12:07:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:08:59.084 12:07:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:59.084 12:07:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.084 12:07:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:59.084 12:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.084 12:07:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:59.084 { 00:08:59.084 "name": "Malloc2", 00:08:59.084 "aliases": [ 00:08:59.084 "e8017c1e-bfa1-4887-86da-1b08d1cdd1e5" 00:08:59.084 ], 00:08:59.084 "product_name": "Malloc disk", 00:08:59.084 "block_size": 512, 00:08:59.084 "num_blocks": 16384, 00:08:59.084 "uuid": "e8017c1e-bfa1-4887-86da-1b08d1cdd1e5", 00:08:59.084 "assigned_rate_limits": { 00:08:59.084 "rw_ios_per_sec": 0, 00:08:59.084 "rw_mbytes_per_sec": 0, 00:08:59.084 "r_mbytes_per_sec": 0, 00:08:59.084 "w_mbytes_per_sec": 0 00:08:59.084 }, 00:08:59.084 "claimed": false, 00:08:59.084 "zoned": false, 00:08:59.084 "supported_io_types": { 00:08:59.084 "read": true, 00:08:59.084 "write": true, 00:08:59.084 "unmap": true, 00:08:59.084 "flush": true, 00:08:59.084 "reset": true, 00:08:59.084 "nvme_admin": false, 00:08:59.084 "nvme_io": false, 00:08:59.084 "nvme_io_md": false, 00:08:59.084 "write_zeroes": true, 00:08:59.084 "zcopy": true, 00:08:59.084 "get_zone_info": false, 00:08:59.084 "zone_management": false, 00:08:59.084 "zone_append": false, 00:08:59.084 "compare": false, 00:08:59.084 "compare_and_write": false, 00:08:59.084 "abort": true, 00:08:59.084 "seek_hole": false, 00:08:59.084 "seek_data": false, 00:08:59.084 "copy": true, 00:08:59.084 "nvme_iov_md": false 00:08:59.084 }, 00:08:59.085 "memory_domains": [ 00:08:59.085 { 00:08:59.085 "dma_device_id": "system", 00:08:59.085 "dma_device_type": 1 00:08:59.085 }, 00:08:59.085 { 00:08:59.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.085 "dma_device_type": 2 00:08:59.085 } 
00:08:59.085 ], 00:08:59.085 "driver_specific": {} 00:08:59.085 } 00:08:59.085 ]' 00:08:59.085 12:07:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:59.085 12:07:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:59.085 12:07:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:59.085 12:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.085 12:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:59.085 [2024-11-25 12:07:55.066995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:59.085 [2024-11-25 12:07:55.067232] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.085 [2024-11-25 12:07:55.067286] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:59.085 [2024-11-25 12:07:55.067312] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.085 [2024-11-25 12:07:55.070440] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.085 [2024-11-25 12:07:55.070494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:59.085 Passthru0 00:08:59.085 12:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.085 12:07:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:59.085 12:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.085 12:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:59.085 12:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.085 12:07:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:59.085 { 00:08:59.085 "name": "Malloc2", 00:08:59.085 "aliases": [ 00:08:59.085 "e8017c1e-bfa1-4887-86da-1b08d1cdd1e5" 
00:08:59.085 ], 00:08:59.085 "product_name": "Malloc disk", 00:08:59.085 "block_size": 512, 00:08:59.085 "num_blocks": 16384, 00:08:59.085 "uuid": "e8017c1e-bfa1-4887-86da-1b08d1cdd1e5", 00:08:59.085 "assigned_rate_limits": { 00:08:59.085 "rw_ios_per_sec": 0, 00:08:59.085 "rw_mbytes_per_sec": 0, 00:08:59.085 "r_mbytes_per_sec": 0, 00:08:59.085 "w_mbytes_per_sec": 0 00:08:59.085 }, 00:08:59.085 "claimed": true, 00:08:59.085 "claim_type": "exclusive_write", 00:08:59.085 "zoned": false, 00:08:59.085 "supported_io_types": { 00:08:59.085 "read": true, 00:08:59.085 "write": true, 00:08:59.085 "unmap": true, 00:08:59.085 "flush": true, 00:08:59.085 "reset": true, 00:08:59.085 "nvme_admin": false, 00:08:59.085 "nvme_io": false, 00:08:59.085 "nvme_io_md": false, 00:08:59.085 "write_zeroes": true, 00:08:59.085 "zcopy": true, 00:08:59.085 "get_zone_info": false, 00:08:59.085 "zone_management": false, 00:08:59.085 "zone_append": false, 00:08:59.085 "compare": false, 00:08:59.085 "compare_and_write": false, 00:08:59.085 "abort": true, 00:08:59.085 "seek_hole": false, 00:08:59.085 "seek_data": false, 00:08:59.085 "copy": true, 00:08:59.085 "nvme_iov_md": false 00:08:59.085 }, 00:08:59.085 "memory_domains": [ 00:08:59.085 { 00:08:59.085 "dma_device_id": "system", 00:08:59.085 "dma_device_type": 1 00:08:59.085 }, 00:08:59.085 { 00:08:59.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.085 "dma_device_type": 2 00:08:59.085 } 00:08:59.085 ], 00:08:59.085 "driver_specific": {} 00:08:59.085 }, 00:08:59.085 { 00:08:59.085 "name": "Passthru0", 00:08:59.085 "aliases": [ 00:08:59.085 "8a099c9f-34a3-5910-99ee-9e86d7253b74" 00:08:59.085 ], 00:08:59.085 "product_name": "passthru", 00:08:59.085 "block_size": 512, 00:08:59.085 "num_blocks": 16384, 00:08:59.085 "uuid": "8a099c9f-34a3-5910-99ee-9e86d7253b74", 00:08:59.085 "assigned_rate_limits": { 00:08:59.085 "rw_ios_per_sec": 0, 00:08:59.085 "rw_mbytes_per_sec": 0, 00:08:59.085 "r_mbytes_per_sec": 0, 00:08:59.085 "w_mbytes_per_sec": 0 
00:08:59.085 }, 00:08:59.085 "claimed": false, 00:08:59.085 "zoned": false, 00:08:59.085 "supported_io_types": { 00:08:59.085 "read": true, 00:08:59.085 "write": true, 00:08:59.085 "unmap": true, 00:08:59.085 "flush": true, 00:08:59.085 "reset": true, 00:08:59.085 "nvme_admin": false, 00:08:59.085 "nvme_io": false, 00:08:59.085 "nvme_io_md": false, 00:08:59.085 "write_zeroes": true, 00:08:59.085 "zcopy": true, 00:08:59.085 "get_zone_info": false, 00:08:59.085 "zone_management": false, 00:08:59.085 "zone_append": false, 00:08:59.085 "compare": false, 00:08:59.085 "compare_and_write": false, 00:08:59.085 "abort": true, 00:08:59.085 "seek_hole": false, 00:08:59.085 "seek_data": false, 00:08:59.085 "copy": true, 00:08:59.085 "nvme_iov_md": false 00:08:59.085 }, 00:08:59.085 "memory_domains": [ 00:08:59.085 { 00:08:59.085 "dma_device_id": "system", 00:08:59.085 "dma_device_type": 1 00:08:59.085 }, 00:08:59.085 { 00:08:59.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.085 "dma_device_type": 2 00:08:59.085 } 00:08:59.085 ], 00:08:59.085 "driver_specific": { 00:08:59.085 "passthru": { 00:08:59.085 "name": "Passthru0", 00:08:59.085 "base_bdev_name": "Malloc2" 00:08:59.085 } 00:08:59.085 } 00:08:59.085 } 00:08:59.085 ]' 00:08:59.085 12:07:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:59.085 12:07:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:59.085 12:07:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:59.085 12:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.085 12:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:59.085 12:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.085 12:07:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:59.085 12:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:59.085 12:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:59.391 12:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.391 12:07:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:59.391 12:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.391 12:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:59.391 12:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.391 12:07:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:59.391 12:07:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:59.391 ************************************ 00:08:59.391 END TEST rpc_daemon_integrity 00:08:59.391 ************************************ 00:08:59.391 12:07:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:59.391 00:08:59.391 real 0m0.360s 00:08:59.391 user 0m0.214s 00:08:59.391 sys 0m0.046s 00:08:59.391 12:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.391 12:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:59.391 12:07:55 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:59.391 12:07:55 rpc -- rpc/rpc.sh@84 -- # killprocess 56855 00:08:59.391 12:07:55 rpc -- common/autotest_common.sh@954 -- # '[' -z 56855 ']' 00:08:59.391 12:07:55 rpc -- common/autotest_common.sh@958 -- # kill -0 56855 00:08:59.391 12:07:55 rpc -- common/autotest_common.sh@959 -- # uname 00:08:59.391 12:07:55 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:59.391 12:07:55 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56855 00:08:59.391 killing process with pid 56855 00:08:59.391 12:07:55 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:59.391 12:07:55 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:08:59.392 12:07:55 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56855' 00:08:59.392 12:07:55 rpc -- common/autotest_common.sh@973 -- # kill 56855 00:08:59.392 12:07:55 rpc -- common/autotest_common.sh@978 -- # wait 56855 00:09:01.939 ************************************ 00:09:01.939 END TEST rpc 00:09:01.939 ************************************ 00:09:01.939 00:09:01.939 real 0m5.286s 00:09:01.939 user 0m5.974s 00:09:01.939 sys 0m0.949s 00:09:01.939 12:07:57 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.939 12:07:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.939 12:07:57 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:01.939 12:07:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:01.939 12:07:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.939 12:07:57 -- common/autotest_common.sh@10 -- # set +x 00:09:01.939 ************************************ 00:09:01.939 START TEST skip_rpc 00:09:01.939 ************************************ 00:09:01.939 12:07:57 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:01.939 * Looking for test storage... 
00:09:01.939 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:01.939 12:07:57 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:01.939 12:07:57 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:01.939 12:07:57 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:01.939 12:07:57 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:01.939 12:07:57 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:01.939 12:07:57 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:01.939 12:07:57 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:01.939 12:07:57 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:01.939 12:07:57 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:01.939 12:07:57 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:01.939 12:07:57 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:01.939 12:07:57 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:01.939 12:07:57 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:01.939 12:07:57 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:01.939 12:07:57 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:01.939 12:07:57 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:01.939 12:07:57 skip_rpc -- scripts/common.sh@345 -- # : 1 00:09:01.939 12:07:57 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:01.939 12:07:57 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:01.939 12:07:57 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:01.939 12:07:57 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:09:01.939 12:07:57 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:01.939 12:07:57 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:09:01.939 12:07:57 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:01.939 12:07:57 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:01.939 12:07:57 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:09:01.939 12:07:57 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:01.939 12:07:57 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:09:01.939 12:07:57 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:01.939 12:07:57 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:01.939 12:07:57 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:01.939 12:07:57 skip_rpc -- scripts/common.sh@368 -- # return 0 00:09:01.939 12:07:57 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:01.939 12:07:57 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:01.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.939 --rc genhtml_branch_coverage=1 00:09:01.939 --rc genhtml_function_coverage=1 00:09:01.939 --rc genhtml_legend=1 00:09:01.939 --rc geninfo_all_blocks=1 00:09:01.939 --rc geninfo_unexecuted_blocks=1 00:09:01.939 00:09:01.939 ' 00:09:01.939 12:07:57 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:01.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.939 --rc genhtml_branch_coverage=1 00:09:01.939 --rc genhtml_function_coverage=1 00:09:01.939 --rc genhtml_legend=1 00:09:01.939 --rc geninfo_all_blocks=1 00:09:01.939 --rc geninfo_unexecuted_blocks=1 00:09:01.939 00:09:01.939 ' 00:09:01.939 12:07:57 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:09:01.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.939 --rc genhtml_branch_coverage=1 00:09:01.939 --rc genhtml_function_coverage=1 00:09:01.939 --rc genhtml_legend=1 00:09:01.939 --rc geninfo_all_blocks=1 00:09:01.939 --rc geninfo_unexecuted_blocks=1 00:09:01.939 00:09:01.939 ' 00:09:01.939 12:07:57 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:01.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.939 --rc genhtml_branch_coverage=1 00:09:01.939 --rc genhtml_function_coverage=1 00:09:01.939 --rc genhtml_legend=1 00:09:01.939 --rc geninfo_all_blocks=1 00:09:01.939 --rc geninfo_unexecuted_blocks=1 00:09:01.939 00:09:01.939 ' 00:09:01.939 12:07:57 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:01.939 12:07:57 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:01.939 12:07:57 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:09:01.939 12:07:57 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:01.939 12:07:57 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.939 12:07:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.939 ************************************ 00:09:01.939 START TEST skip_rpc 00:09:01.939 ************************************ 00:09:01.939 12:07:57 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:09:01.939 12:07:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57089 00:09:01.939 12:07:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:09:01.939 12:07:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:01.939 12:07:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:09:01.939 [2024-11-25 12:07:57.974665] Starting SPDK v25.01-pre 
git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:09:01.939 [2024-11-25 12:07:57.975106] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57089 ] 00:09:02.198 [2024-11-25 12:07:58.157974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.457 [2024-11-25 12:07:58.292719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.728 12:08:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:09:07.728 12:08:02 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:09:07.728 12:08:02 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:09:07.728 12:08:02 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:07.728 12:08:02 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:07.729 12:08:02 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:07.729 12:08:02 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:07.729 12:08:02 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:09:07.729 12:08:02 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.729 12:08:02 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.729 12:08:02 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:07.729 12:08:02 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:09:07.729 12:08:02 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:07.729 12:08:02 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:07.729 12:08:02 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:09:07.729 12:08:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:09:07.729 12:08:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57089 00:09:07.729 12:08:02 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57089 ']' 00:09:07.729 12:08:02 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57089 00:09:07.729 12:08:02 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:09:07.729 12:08:02 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:07.729 12:08:02 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57089 00:09:07.729 12:08:02 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:07.729 12:08:02 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:07.729 12:08:02 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57089' 00:09:07.729 killing process with pid 57089 00:09:07.729 12:08:02 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57089 00:09:07.729 12:08:02 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57089 00:09:09.176 00:09:09.176 real 0m7.261s 00:09:09.176 user 0m6.700s 00:09:09.176 sys 0m0.456s 00:09:09.176 12:08:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.176 ************************************ 00:09:09.176 END TEST skip_rpc 00:09:09.176 ************************************ 00:09:09.176 12:08:05 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.176 12:08:05 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:09:09.177 12:08:05 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:09.177 12:08:05 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.177 12:08:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.177 
************************************ 00:09:09.177 START TEST skip_rpc_with_json 00:09:09.177 ************************************ 00:09:09.177 12:08:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:09:09.177 12:08:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:09:09.177 12:08:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57193 00:09:09.177 12:08:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:09.177 12:08:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57193 00:09:09.177 12:08:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:09.177 12:08:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57193 ']' 00:09:09.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.177 12:08:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.177 12:08:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:09.177 12:08:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.177 12:08:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:09.177 12:08:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:09.435 [2024-11-25 12:08:05.290080] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:09:09.435 [2024-11-25 12:08:05.290633] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57193 ] 00:09:09.435 [2024-11-25 12:08:05.476715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.694 [2024-11-25 12:08:05.612734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.630 12:08:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:10.630 12:08:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:09:10.630 12:08:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:09:10.630 12:08:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.630 12:08:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:10.630 [2024-11-25 12:08:06.474836] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:09:10.630 request: 00:09:10.630 { 00:09:10.630 "trtype": "tcp", 00:09:10.630 "method": "nvmf_get_transports", 00:09:10.630 "req_id": 1 00:09:10.630 } 00:09:10.630 Got JSON-RPC error response 00:09:10.630 response: 00:09:10.630 { 00:09:10.630 "code": -19, 00:09:10.630 "message": "No such device" 00:09:10.630 } 00:09:10.630 12:08:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:10.630 12:08:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:09:10.630 12:08:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.630 12:08:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:10.630 [2024-11-25 12:08:06.487006] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:09:10.630 12:08:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.630 12:08:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:09:10.630 12:08:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.630 12:08:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:10.630 12:08:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.630 12:08:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:10.630 { 00:09:10.630 "subsystems": [ 00:09:10.630 { 00:09:10.630 "subsystem": "fsdev", 00:09:10.630 "config": [ 00:09:10.630 { 00:09:10.630 "method": "fsdev_set_opts", 00:09:10.630 "params": { 00:09:10.630 "fsdev_io_pool_size": 65535, 00:09:10.630 "fsdev_io_cache_size": 256 00:09:10.630 } 00:09:10.630 } 00:09:10.630 ] 00:09:10.630 }, 00:09:10.630 { 00:09:10.630 "subsystem": "keyring", 00:09:10.630 "config": [] 00:09:10.630 }, 00:09:10.630 { 00:09:10.630 "subsystem": "iobuf", 00:09:10.630 "config": [ 00:09:10.630 { 00:09:10.630 "method": "iobuf_set_options", 00:09:10.630 "params": { 00:09:10.630 "small_pool_count": 8192, 00:09:10.630 "large_pool_count": 1024, 00:09:10.630 "small_bufsize": 8192, 00:09:10.630 "large_bufsize": 135168, 00:09:10.630 "enable_numa": false 00:09:10.630 } 00:09:10.630 } 00:09:10.630 ] 00:09:10.630 }, 00:09:10.630 { 00:09:10.630 "subsystem": "sock", 00:09:10.630 "config": [ 00:09:10.630 { 00:09:10.630 "method": "sock_set_default_impl", 00:09:10.630 "params": { 00:09:10.630 "impl_name": "posix" 00:09:10.630 } 00:09:10.630 }, 00:09:10.630 { 00:09:10.630 "method": "sock_impl_set_options", 00:09:10.630 "params": { 00:09:10.630 "impl_name": "ssl", 00:09:10.630 "recv_buf_size": 4096, 00:09:10.630 "send_buf_size": 4096, 00:09:10.630 "enable_recv_pipe": true, 00:09:10.630 "enable_quickack": false, 00:09:10.630 
"enable_placement_id": 0, 00:09:10.630 "enable_zerocopy_send_server": true, 00:09:10.630 "enable_zerocopy_send_client": false, 00:09:10.630 "zerocopy_threshold": 0, 00:09:10.630 "tls_version": 0, 00:09:10.630 "enable_ktls": false 00:09:10.630 } 00:09:10.630 }, 00:09:10.630 { 00:09:10.630 "method": "sock_impl_set_options", 00:09:10.630 "params": { 00:09:10.630 "impl_name": "posix", 00:09:10.630 "recv_buf_size": 2097152, 00:09:10.630 "send_buf_size": 2097152, 00:09:10.630 "enable_recv_pipe": true, 00:09:10.630 "enable_quickack": false, 00:09:10.630 "enable_placement_id": 0, 00:09:10.630 "enable_zerocopy_send_server": true, 00:09:10.630 "enable_zerocopy_send_client": false, 00:09:10.630 "zerocopy_threshold": 0, 00:09:10.630 "tls_version": 0, 00:09:10.630 "enable_ktls": false 00:09:10.630 } 00:09:10.630 } 00:09:10.630 ] 00:09:10.630 }, 00:09:10.630 { 00:09:10.630 "subsystem": "vmd", 00:09:10.630 "config": [] 00:09:10.630 }, 00:09:10.630 { 00:09:10.630 "subsystem": "accel", 00:09:10.630 "config": [ 00:09:10.630 { 00:09:10.630 "method": "accel_set_options", 00:09:10.630 "params": { 00:09:10.630 "small_cache_size": 128, 00:09:10.630 "large_cache_size": 16, 00:09:10.630 "task_count": 2048, 00:09:10.630 "sequence_count": 2048, 00:09:10.630 "buf_count": 2048 00:09:10.630 } 00:09:10.630 } 00:09:10.630 ] 00:09:10.630 }, 00:09:10.630 { 00:09:10.630 "subsystem": "bdev", 00:09:10.630 "config": [ 00:09:10.630 { 00:09:10.630 "method": "bdev_set_options", 00:09:10.630 "params": { 00:09:10.630 "bdev_io_pool_size": 65535, 00:09:10.630 "bdev_io_cache_size": 256, 00:09:10.630 "bdev_auto_examine": true, 00:09:10.630 "iobuf_small_cache_size": 128, 00:09:10.630 "iobuf_large_cache_size": 16 00:09:10.630 } 00:09:10.630 }, 00:09:10.630 { 00:09:10.630 "method": "bdev_raid_set_options", 00:09:10.630 "params": { 00:09:10.630 "process_window_size_kb": 1024, 00:09:10.630 "process_max_bandwidth_mb_sec": 0 00:09:10.630 } 00:09:10.630 }, 00:09:10.630 { 00:09:10.630 "method": "bdev_iscsi_set_options", 
00:09:10.630 "params": { 00:09:10.630 "timeout_sec": 30 00:09:10.630 } 00:09:10.630 }, 00:09:10.630 { 00:09:10.631 "method": "bdev_nvme_set_options", 00:09:10.631 "params": { 00:09:10.631 "action_on_timeout": "none", 00:09:10.631 "timeout_us": 0, 00:09:10.631 "timeout_admin_us": 0, 00:09:10.631 "keep_alive_timeout_ms": 10000, 00:09:10.631 "arbitration_burst": 0, 00:09:10.631 "low_priority_weight": 0, 00:09:10.631 "medium_priority_weight": 0, 00:09:10.631 "high_priority_weight": 0, 00:09:10.631 "nvme_adminq_poll_period_us": 10000, 00:09:10.631 "nvme_ioq_poll_period_us": 0, 00:09:10.631 "io_queue_requests": 0, 00:09:10.631 "delay_cmd_submit": true, 00:09:10.631 "transport_retry_count": 4, 00:09:10.631 "bdev_retry_count": 3, 00:09:10.631 "transport_ack_timeout": 0, 00:09:10.631 "ctrlr_loss_timeout_sec": 0, 00:09:10.631 "reconnect_delay_sec": 0, 00:09:10.631 "fast_io_fail_timeout_sec": 0, 00:09:10.631 "disable_auto_failback": false, 00:09:10.631 "generate_uuids": false, 00:09:10.631 "transport_tos": 0, 00:09:10.631 "nvme_error_stat": false, 00:09:10.631 "rdma_srq_size": 0, 00:09:10.631 "io_path_stat": false, 00:09:10.631 "allow_accel_sequence": false, 00:09:10.631 "rdma_max_cq_size": 0, 00:09:10.631 "rdma_cm_event_timeout_ms": 0, 00:09:10.631 "dhchap_digests": [ 00:09:10.631 "sha256", 00:09:10.631 "sha384", 00:09:10.631 "sha512" 00:09:10.631 ], 00:09:10.631 "dhchap_dhgroups": [ 00:09:10.631 "null", 00:09:10.631 "ffdhe2048", 00:09:10.631 "ffdhe3072", 00:09:10.631 "ffdhe4096", 00:09:10.631 "ffdhe6144", 00:09:10.631 "ffdhe8192" 00:09:10.631 ] 00:09:10.631 } 00:09:10.631 }, 00:09:10.631 { 00:09:10.631 "method": "bdev_nvme_set_hotplug", 00:09:10.631 "params": { 00:09:10.631 "period_us": 100000, 00:09:10.631 "enable": false 00:09:10.631 } 00:09:10.631 }, 00:09:10.631 { 00:09:10.631 "method": "bdev_wait_for_examine" 00:09:10.631 } 00:09:10.631 ] 00:09:10.631 }, 00:09:10.631 { 00:09:10.631 "subsystem": "scsi", 00:09:10.631 "config": null 00:09:10.631 }, 00:09:10.631 { 
00:09:10.631 "subsystem": "scheduler", 00:09:10.631 "config": [ 00:09:10.631 { 00:09:10.631 "method": "framework_set_scheduler", 00:09:10.631 "params": { 00:09:10.631 "name": "static" 00:09:10.631 } 00:09:10.631 } 00:09:10.631 ] 00:09:10.631 }, 00:09:10.631 { 00:09:10.631 "subsystem": "vhost_scsi", 00:09:10.631 "config": [] 00:09:10.631 }, 00:09:10.631 { 00:09:10.631 "subsystem": "vhost_blk", 00:09:10.631 "config": [] 00:09:10.631 }, 00:09:10.631 { 00:09:10.631 "subsystem": "ublk", 00:09:10.631 "config": [] 00:09:10.631 }, 00:09:10.631 { 00:09:10.631 "subsystem": "nbd", 00:09:10.631 "config": [] 00:09:10.631 }, 00:09:10.631 { 00:09:10.631 "subsystem": "nvmf", 00:09:10.631 "config": [ 00:09:10.631 { 00:09:10.631 "method": "nvmf_set_config", 00:09:10.631 "params": { 00:09:10.631 "discovery_filter": "match_any", 00:09:10.631 "admin_cmd_passthru": { 00:09:10.631 "identify_ctrlr": false 00:09:10.631 }, 00:09:10.631 "dhchap_digests": [ 00:09:10.631 "sha256", 00:09:10.631 "sha384", 00:09:10.631 "sha512" 00:09:10.631 ], 00:09:10.631 "dhchap_dhgroups": [ 00:09:10.631 "null", 00:09:10.631 "ffdhe2048", 00:09:10.631 "ffdhe3072", 00:09:10.631 "ffdhe4096", 00:09:10.631 "ffdhe6144", 00:09:10.631 "ffdhe8192" 00:09:10.631 ] 00:09:10.631 } 00:09:10.631 }, 00:09:10.631 { 00:09:10.631 "method": "nvmf_set_max_subsystems", 00:09:10.631 "params": { 00:09:10.631 "max_subsystems": 1024 00:09:10.631 } 00:09:10.631 }, 00:09:10.631 { 00:09:10.631 "method": "nvmf_set_crdt", 00:09:10.631 "params": { 00:09:10.631 "crdt1": 0, 00:09:10.631 "crdt2": 0, 00:09:10.631 "crdt3": 0 00:09:10.631 } 00:09:10.631 }, 00:09:10.631 { 00:09:10.631 "method": "nvmf_create_transport", 00:09:10.631 "params": { 00:09:10.631 "trtype": "TCP", 00:09:10.631 "max_queue_depth": 128, 00:09:10.631 "max_io_qpairs_per_ctrlr": 127, 00:09:10.631 "in_capsule_data_size": 4096, 00:09:10.631 "max_io_size": 131072, 00:09:10.631 "io_unit_size": 131072, 00:09:10.631 "max_aq_depth": 128, 00:09:10.631 "num_shared_buffers": 511, 
00:09:10.631 "buf_cache_size": 4294967295, 00:09:10.631 "dif_insert_or_strip": false, 00:09:10.631 "zcopy": false, 00:09:10.631 "c2h_success": true, 00:09:10.631 "sock_priority": 0, 00:09:10.631 "abort_timeout_sec": 1, 00:09:10.631 "ack_timeout": 0, 00:09:10.631 "data_wr_pool_size": 0 00:09:10.631 } 00:09:10.631 } 00:09:10.631 ] 00:09:10.631 }, 00:09:10.631 { 00:09:10.631 "subsystem": "iscsi", 00:09:10.631 "config": [ 00:09:10.631 { 00:09:10.631 "method": "iscsi_set_options", 00:09:10.631 "params": { 00:09:10.631 "node_base": "iqn.2016-06.io.spdk", 00:09:10.631 "max_sessions": 128, 00:09:10.631 "max_connections_per_session": 2, 00:09:10.631 "max_queue_depth": 64, 00:09:10.631 "default_time2wait": 2, 00:09:10.631 "default_time2retain": 20, 00:09:10.631 "first_burst_length": 8192, 00:09:10.631 "immediate_data": true, 00:09:10.631 "allow_duplicated_isid": false, 00:09:10.631 "error_recovery_level": 0, 00:09:10.631 "nop_timeout": 60, 00:09:10.631 "nop_in_interval": 30, 00:09:10.631 "disable_chap": false, 00:09:10.631 "require_chap": false, 00:09:10.631 "mutual_chap": false, 00:09:10.631 "chap_group": 0, 00:09:10.631 "max_large_datain_per_connection": 64, 00:09:10.631 "max_r2t_per_connection": 4, 00:09:10.631 "pdu_pool_size": 36864, 00:09:10.631 "immediate_data_pool_size": 16384, 00:09:10.631 "data_out_pool_size": 2048 00:09:10.631 } 00:09:10.631 } 00:09:10.631 ] 00:09:10.631 } 00:09:10.631 ] 00:09:10.631 } 00:09:10.631 12:08:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:10.631 12:08:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57193 00:09:10.631 12:08:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57193 ']' 00:09:10.631 12:08:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57193 00:09:10.631 12:08:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:09:10.631 12:08:06 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:10.631 12:08:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57193 00:09:10.631 killing process with pid 57193 00:09:10.631 12:08:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:10.631 12:08:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:10.631 12:08:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57193' 00:09:10.631 12:08:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57193 00:09:10.631 12:08:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57193 00:09:13.177 12:08:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57244 00:09:13.177 12:08:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:13.177 12:08:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:09:18.475 12:08:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57244 00:09:18.475 12:08:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57244 ']' 00:09:18.475 12:08:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57244 00:09:18.475 12:08:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:09:18.475 12:08:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:18.475 12:08:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57244 00:09:18.475 killing process with pid 57244 00:09:18.475 12:08:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:18.475 12:08:13 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:18.475 12:08:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57244' 00:09:18.475 12:08:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57244 00:09:18.475 12:08:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57244 00:09:20.374 12:08:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:20.374 12:08:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:20.374 00:09:20.374 real 0m10.984s 00:09:20.374 user 0m10.395s 00:09:20.374 sys 0m0.983s 00:09:20.374 12:08:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.374 12:08:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:20.374 ************************************ 00:09:20.374 END TEST skip_rpc_with_json 00:09:20.374 ************************************ 00:09:20.374 12:08:16 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:09:20.374 12:08:16 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:20.374 12:08:16 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.374 12:08:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.374 ************************************ 00:09:20.374 START TEST skip_rpc_with_delay 00:09:20.374 ************************************ 00:09:20.374 12:08:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:09:20.374 12:08:16 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:20.374 12:08:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:09:20.374 
12:08:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:20.374 12:08:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:20.374 12:08:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:20.374 12:08:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:20.374 12:08:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:20.374 12:08:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:20.374 12:08:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:20.374 12:08:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:20.374 12:08:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:20.374 12:08:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:20.374 [2024-11-25 12:08:16.298974] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:09:20.374 12:08:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:09:20.374 12:08:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:20.374 12:08:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:20.374 12:08:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:20.374 00:09:20.374 real 0m0.176s 00:09:20.374 user 0m0.097s 00:09:20.374 sys 0m0.078s 00:09:20.374 ************************************ 00:09:20.374 END TEST skip_rpc_with_delay 00:09:20.374 ************************************ 00:09:20.374 12:08:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.374 12:08:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:09:20.374 12:08:16 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:09:20.374 12:08:16 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:09:20.374 12:08:16 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:09:20.374 12:08:16 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:20.374 12:08:16 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.374 12:08:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.374 ************************************ 00:09:20.374 START TEST exit_on_failed_rpc_init 00:09:20.374 ************************************ 00:09:20.374 12:08:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:09:20.374 12:08:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57372 00:09:20.374 12:08:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57372 00:09:20.374 12:08:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:20.374 12:08:16 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57372 ']' 00:09:20.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.374 12:08:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.374 12:08:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:20.374 12:08:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.374 12:08:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:20.374 12:08:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:20.632 [2024-11-25 12:08:16.526102] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:09:20.632 [2024-11-25 12:08:16.526263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57372 ] 00:09:20.632 [2024-11-25 12:08:16.710422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.890 [2024-11-25 12:08:16.868402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.826 12:08:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:21.826 12:08:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:09:21.826 12:08:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:21.826 12:08:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:21.826 12:08:17 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:09:21.826 12:08:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:21.826 12:08:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:21.826 12:08:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:21.826 12:08:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:21.826 12:08:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:21.826 12:08:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:21.826 12:08:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:21.826 12:08:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:21.826 12:08:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:21.826 12:08:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:21.827 [2024-11-25 12:08:17.885997] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:09:21.827 [2024-11-25 12:08:17.886170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57401 ] 00:09:22.086 [2024-11-25 12:08:18.066969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.345 [2024-11-25 12:08:18.225064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.345 [2024-11-25 12:08:18.225208] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:09:22.345 [2024-11-25 12:08:18.225235] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:09:22.345 [2024-11-25 12:08:18.225257] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:22.604 12:08:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:09:22.604 12:08:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:22.604 12:08:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:09:22.604 12:08:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:09:22.604 12:08:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:09:22.604 12:08:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:22.604 12:08:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:22.604 12:08:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57372 00:09:22.604 12:08:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57372 ']' 00:09:22.604 12:08:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57372 00:09:22.604 12:08:18 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:09:22.604 12:08:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:22.604 12:08:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57372 00:09:22.604 killing process with pid 57372 00:09:22.604 12:08:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:22.604 12:08:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:22.604 12:08:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57372' 00:09:22.604 12:08:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57372 00:09:22.604 12:08:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57372 00:09:25.139 00:09:25.139 real 0m4.292s 00:09:25.139 user 0m4.760s 00:09:25.139 sys 0m0.650s 00:09:25.139 12:08:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.139 ************************************ 00:09:25.139 END TEST exit_on_failed_rpc_init 00:09:25.139 ************************************ 00:09:25.139 12:08:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:25.139 12:08:20 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:25.139 00:09:25.139 real 0m23.110s 00:09:25.139 user 0m22.123s 00:09:25.139 sys 0m2.379s 00:09:25.139 12:08:20 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.139 ************************************ 00:09:25.139 END TEST skip_rpc 00:09:25.139 ************************************ 00:09:25.139 12:08:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.139 12:08:20 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:25.139 12:08:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:25.139 12:08:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.139 12:08:20 -- common/autotest_common.sh@10 -- # set +x 00:09:25.139 ************************************ 00:09:25.139 START TEST rpc_client 00:09:25.139 ************************************ 00:09:25.139 12:08:20 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:25.139 * Looking for test storage... 00:09:25.139 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:09:25.139 12:08:20 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:25.139 12:08:20 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:09:25.139 12:08:20 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:25.139 12:08:20 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:25.139 12:08:20 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:25.139 12:08:20 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:25.139 12:08:20 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:25.139 12:08:20 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:09:25.139 12:08:20 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:09:25.139 12:08:20 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:09:25.139 12:08:20 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:09:25.139 12:08:20 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:09:25.139 12:08:20 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:09:25.139 12:08:20 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:09:25.140 12:08:20 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:25.140 12:08:20 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:09:25.140 12:08:20 rpc_client -- scripts/common.sh@345 
-- # : 1 00:09:25.140 12:08:20 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:25.140 12:08:20 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:25.140 12:08:20 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:09:25.140 12:08:20 rpc_client -- scripts/common.sh@353 -- # local d=1 00:09:25.140 12:08:20 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.140 12:08:20 rpc_client -- scripts/common.sh@355 -- # echo 1 00:09:25.140 12:08:20 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:09:25.140 12:08:20 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:09:25.140 12:08:20 rpc_client -- scripts/common.sh@353 -- # local d=2 00:09:25.140 12:08:20 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.140 12:08:20 rpc_client -- scripts/common.sh@355 -- # echo 2 00:09:25.140 12:08:20 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:09:25.140 12:08:20 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:25.140 12:08:20 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:25.140 12:08:20 rpc_client -- scripts/common.sh@368 -- # return 0 00:09:25.140 12:08:20 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:25.140 12:08:20 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:25.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.140 --rc genhtml_branch_coverage=1 00:09:25.140 --rc genhtml_function_coverage=1 00:09:25.140 --rc genhtml_legend=1 00:09:25.140 --rc geninfo_all_blocks=1 00:09:25.140 --rc geninfo_unexecuted_blocks=1 00:09:25.140 00:09:25.140 ' 00:09:25.140 12:08:20 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:25.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.140 --rc genhtml_branch_coverage=1 00:09:25.140 --rc genhtml_function_coverage=1 00:09:25.140 --rc 
genhtml_legend=1 00:09:25.140 --rc geninfo_all_blocks=1 00:09:25.140 --rc geninfo_unexecuted_blocks=1 00:09:25.140 00:09:25.140 ' 00:09:25.140 12:08:20 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:25.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.140 --rc genhtml_branch_coverage=1 00:09:25.140 --rc genhtml_function_coverage=1 00:09:25.140 --rc genhtml_legend=1 00:09:25.140 --rc geninfo_all_blocks=1 00:09:25.140 --rc geninfo_unexecuted_blocks=1 00:09:25.140 00:09:25.140 ' 00:09:25.140 12:08:20 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:25.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.140 --rc genhtml_branch_coverage=1 00:09:25.140 --rc genhtml_function_coverage=1 00:09:25.140 --rc genhtml_legend=1 00:09:25.140 --rc geninfo_all_blocks=1 00:09:25.140 --rc geninfo_unexecuted_blocks=1 00:09:25.140 00:09:25.140 ' 00:09:25.140 12:08:20 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:09:25.140 OK 00:09:25.140 12:08:21 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:25.140 00:09:25.140 real 0m0.258s 00:09:25.140 user 0m0.137s 00:09:25.140 sys 0m0.128s 00:09:25.140 12:08:21 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.140 12:08:21 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:09:25.140 ************************************ 00:09:25.140 END TEST rpc_client 00:09:25.140 ************************************ 00:09:25.140 12:08:21 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:25.140 12:08:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:25.140 12:08:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.140 12:08:21 -- common/autotest_common.sh@10 -- # set +x 00:09:25.140 ************************************ 00:09:25.140 START TEST json_config 
00:09:25.140 ************************************ 00:09:25.140 12:08:21 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:25.140 12:08:21 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:25.140 12:08:21 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:09:25.140 12:08:21 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:25.400 12:08:21 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:25.400 12:08:21 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:25.400 12:08:21 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:25.400 12:08:21 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:25.400 12:08:21 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:09:25.400 12:08:21 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:09:25.400 12:08:21 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:09:25.400 12:08:21 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:09:25.400 12:08:21 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:09:25.400 12:08:21 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:09:25.400 12:08:21 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:09:25.400 12:08:21 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:25.400 12:08:21 json_config -- scripts/common.sh@344 -- # case "$op" in 00:09:25.400 12:08:21 json_config -- scripts/common.sh@345 -- # : 1 00:09:25.400 12:08:21 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:25.400 12:08:21 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:25.400 12:08:21 json_config -- scripts/common.sh@365 -- # decimal 1 00:09:25.400 12:08:21 json_config -- scripts/common.sh@353 -- # local d=1 00:09:25.400 12:08:21 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.400 12:08:21 json_config -- scripts/common.sh@355 -- # echo 1 00:09:25.400 12:08:21 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:09:25.400 12:08:21 json_config -- scripts/common.sh@366 -- # decimal 2 00:09:25.400 12:08:21 json_config -- scripts/common.sh@353 -- # local d=2 00:09:25.400 12:08:21 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.400 12:08:21 json_config -- scripts/common.sh@355 -- # echo 2 00:09:25.400 12:08:21 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:09:25.400 12:08:21 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:25.400 12:08:21 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:25.400 12:08:21 json_config -- scripts/common.sh@368 -- # return 0 00:09:25.400 12:08:21 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:25.400 12:08:21 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:25.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.400 --rc genhtml_branch_coverage=1 00:09:25.400 --rc genhtml_function_coverage=1 00:09:25.400 --rc genhtml_legend=1 00:09:25.400 --rc geninfo_all_blocks=1 00:09:25.400 --rc geninfo_unexecuted_blocks=1 00:09:25.400 00:09:25.400 ' 00:09:25.400 12:08:21 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:25.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.400 --rc genhtml_branch_coverage=1 00:09:25.400 --rc genhtml_function_coverage=1 00:09:25.400 --rc genhtml_legend=1 00:09:25.400 --rc geninfo_all_blocks=1 00:09:25.400 --rc geninfo_unexecuted_blocks=1 00:09:25.400 00:09:25.400 ' 00:09:25.400 12:08:21 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:25.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.400 --rc genhtml_branch_coverage=1 00:09:25.400 --rc genhtml_function_coverage=1 00:09:25.400 --rc genhtml_legend=1 00:09:25.400 --rc geninfo_all_blocks=1 00:09:25.400 --rc geninfo_unexecuted_blocks=1 00:09:25.400 00:09:25.400 ' 00:09:25.400 12:08:21 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:25.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.400 --rc genhtml_branch_coverage=1 00:09:25.400 --rc genhtml_function_coverage=1 00:09:25.400 --rc genhtml_legend=1 00:09:25.400 --rc geninfo_all_blocks=1 00:09:25.400 --rc geninfo_unexecuted_blocks=1 00:09:25.400 00:09:25.400 ' 00:09:25.400 12:08:21 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:25.400 12:08:21 json_config -- nvmf/common.sh@7 -- # uname -s 00:09:25.400 12:08:21 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:25.400 12:08:21 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:25.400 12:08:21 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:25.400 12:08:21 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:25.400 12:08:21 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:25.400 12:08:21 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:25.400 12:08:21 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:25.400 12:08:21 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:25.400 12:08:21 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:25.400 12:08:21 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:25.400 12:08:21 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ef1d303-9415-4390-8cec-f584d6dbee6a 00:09:25.400 12:08:21 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=2ef1d303-9415-4390-8cec-f584d6dbee6a 00:09:25.400 12:08:21 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:25.400 12:08:21 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:25.400 12:08:21 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:25.400 12:08:21 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:25.400 12:08:21 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:25.400 12:08:21 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:09:25.400 12:08:21 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.400 12:08:21 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.400 12:08:21 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.400 12:08:21 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.400 12:08:21 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.400 12:08:21 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.400 12:08:21 json_config -- paths/export.sh@5 -- # export PATH 00:09:25.401 12:08:21 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.401 12:08:21 json_config -- nvmf/common.sh@51 -- # : 0 00:09:25.401 12:08:21 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:25.401 12:08:21 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:25.401 12:08:21 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:25.401 12:08:21 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:25.401 12:08:21 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:25.401 12:08:21 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:25.401 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:25.401 12:08:21 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:25.401 12:08:21 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:25.401 12:08:21 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:25.401 WARNING: No tests are enabled so not running JSON configuration tests 00:09:25.401 12:08:21 json_config -- 
json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:25.401 12:08:21 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:09:25.401 12:08:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:09:25.401 12:08:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:09:25.401 12:08:21 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:09:25.401 12:08:21 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:09:25.401 12:08:21 json_config -- json_config/json_config.sh@28 -- # exit 0 00:09:25.401 ************************************ 00:09:25.401 END TEST json_config 00:09:25.401 ************************************ 00:09:25.401 00:09:25.401 real 0m0.194s 00:09:25.401 user 0m0.123s 00:09:25.401 sys 0m0.072s 00:09:25.401 12:08:21 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.401 12:08:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:25.401 12:08:21 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:25.401 12:08:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:25.401 12:08:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.401 12:08:21 -- common/autotest_common.sh@10 -- # set +x 00:09:25.401 ************************************ 00:09:25.401 START TEST json_config_extra_key 00:09:25.401 ************************************ 00:09:25.401 12:08:21 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:25.401 12:08:21 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:25.401 12:08:21 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:09:25.401 12:08:21 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:25.661 12:08:21 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:25.661 12:08:21 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:25.661 12:08:21 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:25.661 12:08:21 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:25.661 12:08:21 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:09:25.661 12:08:21 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:09:25.661 12:08:21 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:09:25.661 12:08:21 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:09:25.661 12:08:21 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:09:25.661 12:08:21 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:09:25.661 12:08:21 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:09:25.661 12:08:21 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:25.661 12:08:21 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:09:25.661 12:08:21 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:09:25.661 12:08:21 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:25.661 12:08:21 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:25.661 12:08:21 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:09:25.661 12:08:21 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:09:25.661 12:08:21 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.661 12:08:21 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:09:25.661 12:08:21 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:09:25.661 12:08:21 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:09:25.661 12:08:21 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:09:25.661 12:08:21 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.661 12:08:21 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:09:25.661 12:08:21 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:09:25.661 12:08:21 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:25.661 12:08:21 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:25.661 12:08:21 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:09:25.661 12:08:21 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:25.661 12:08:21 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:25.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.661 --rc genhtml_branch_coverage=1 00:09:25.661 --rc genhtml_function_coverage=1 00:09:25.661 --rc genhtml_legend=1 00:09:25.661 --rc geninfo_all_blocks=1 00:09:25.661 --rc geninfo_unexecuted_blocks=1 00:09:25.661 00:09:25.661 ' 00:09:25.661 12:08:21 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:25.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.661 --rc genhtml_branch_coverage=1 00:09:25.661 --rc genhtml_function_coverage=1 00:09:25.661 --rc 
genhtml_legend=1 00:09:25.661 --rc geninfo_all_blocks=1 00:09:25.661 --rc geninfo_unexecuted_blocks=1 00:09:25.661 00:09:25.661 ' 00:09:25.661 12:08:21 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:25.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.661 --rc genhtml_branch_coverage=1 00:09:25.661 --rc genhtml_function_coverage=1 00:09:25.661 --rc genhtml_legend=1 00:09:25.661 --rc geninfo_all_blocks=1 00:09:25.661 --rc geninfo_unexecuted_blocks=1 00:09:25.661 00:09:25.661 ' 00:09:25.661 12:08:21 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:25.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.661 --rc genhtml_branch_coverage=1 00:09:25.661 --rc genhtml_function_coverage=1 00:09:25.661 --rc genhtml_legend=1 00:09:25.661 --rc geninfo_all_blocks=1 00:09:25.661 --rc geninfo_unexecuted_blocks=1 00:09:25.661 00:09:25.661 ' 00:09:25.661 12:08:21 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:25.661 12:08:21 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:09:25.661 12:08:21 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:25.661 12:08:21 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:25.661 12:08:21 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:25.661 12:08:21 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:25.661 12:08:21 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:25.661 12:08:21 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:25.661 12:08:21 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:25.661 12:08:21 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:25.662 12:08:21 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:25.662 12:08:21 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:25.662 12:08:21 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ef1d303-9415-4390-8cec-f584d6dbee6a 00:09:25.662 12:08:21 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=2ef1d303-9415-4390-8cec-f584d6dbee6a 00:09:25.662 12:08:21 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:25.662 12:08:21 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:25.662 12:08:21 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:25.662 12:08:21 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:25.662 12:08:21 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:25.662 12:08:21 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:09:25.662 12:08:21 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.662 12:08:21 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.662 12:08:21 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.662 12:08:21 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.662 12:08:21 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.662 12:08:21 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.662 12:08:21 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:09:25.662 12:08:21 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.662 12:08:21 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:09:25.662 12:08:21 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:25.662 12:08:21 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:25.662 12:08:21 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:25.662 12:08:21 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:25.662 12:08:21 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:09:25.662 12:08:21 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:25.662 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:25.662 12:08:21 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:25.662 12:08:21 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:25.662 12:08:21 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:25.662 12:08:21 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:25.662 INFO: launching applications... 00:09:25.662 Waiting for target to run... 00:09:25.662 12:08:21 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:09:25.662 12:08:21 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:09:25.662 12:08:21 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:09:25.662 12:08:21 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:09:25.662 12:08:21 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:09:25.662 12:08:21 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:09:25.662 12:08:21 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:09:25.662 12:08:21 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:09:25.662 12:08:21 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:25.662 12:08:21 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:09:25.662 12:08:21 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:25.662 12:08:21 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:09:25.662 12:08:21 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:09:25.662 12:08:21 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:25.662 12:08:21 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:25.662 12:08:21 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:09:25.662 12:08:21 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:25.662 12:08:21 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:25.662 12:08:21 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57600 00:09:25.662 12:08:21 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:25.662 12:08:21 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57600 /var/tmp/spdk_tgt.sock 00:09:25.662 12:08:21 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:25.662 12:08:21 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57600 ']' 00:09:25.662 12:08:21 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:25.662 12:08:21 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:25.662 12:08:21 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:09:25.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:25.662 12:08:21 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:25.662 12:08:21 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:25.662 [2024-11-25 12:08:21.666965] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:09:25.662 [2024-11-25 12:08:21.667421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57600 ] 00:09:26.230 [2024-11-25 12:08:22.150027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.230 [2024-11-25 12:08:22.289088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.168 12:08:22 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:27.168 12:08:22 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:09:27.168 12:08:22 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:09:27.168 00:09:27.168 12:08:22 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:09:27.168 INFO: shutting down applications... 
00:09:27.168 12:08:22 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:09:27.168 12:08:22 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:09:27.168 12:08:22 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:27.168 12:08:22 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57600 ]] 00:09:27.168 12:08:22 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57600 00:09:27.168 12:08:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:27.168 12:08:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:27.168 12:08:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57600 00:09:27.168 12:08:22 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:27.428 12:08:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:27.428 12:08:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:27.428 12:08:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57600 00:09:27.428 12:08:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:27.997 12:08:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:27.997 12:08:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:27.997 12:08:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57600 00:09:27.997 12:08:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:28.565 12:08:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:28.565 12:08:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:28.565 12:08:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57600 00:09:28.565 12:08:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:29.134 12:08:24 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:09:29.134 12:08:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:29.134 12:08:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57600 00:09:29.134 12:08:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:29.703 12:08:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:29.703 12:08:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:29.703 12:08:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57600 00:09:29.703 12:08:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:29.962 12:08:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:29.962 12:08:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:29.962 SPDK target shutdown done 00:09:29.962 Success 00:09:29.962 12:08:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57600 00:09:29.962 12:08:26 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:29.962 12:08:26 json_config_extra_key -- json_config/common.sh@43 -- # break 00:09:29.962 12:08:26 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:29.962 12:08:26 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:29.962 12:08:26 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:09:29.962 ************************************ 00:09:29.962 END TEST json_config_extra_key 00:09:29.962 ************************************ 00:09:29.962 00:09:29.962 real 0m4.642s 00:09:29.962 user 0m3.949s 00:09:29.962 sys 0m0.648s 00:09:29.962 12:08:26 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.962 12:08:26 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:29.962 12:08:26 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:29.962 12:08:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:29.962 12:08:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.962 12:08:26 -- common/autotest_common.sh@10 -- # set +x 00:09:30.221 ************************************ 00:09:30.221 START TEST alias_rpc 00:09:30.221 ************************************ 00:09:30.221 12:08:26 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:30.221 * Looking for test storage... 00:09:30.221 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:09:30.221 12:08:26 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:30.221 12:08:26 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:30.221 12:08:26 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:30.221 12:08:26 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:30.221 12:08:26 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:30.221 12:08:26 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:30.221 12:08:26 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:30.221 12:08:26 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.221 12:08:26 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:30.221 12:08:26 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:30.221 12:08:26 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:30.221 12:08:26 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:30.221 12:08:26 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:30.221 12:08:26 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:30.221 12:08:26 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:30.221 12:08:26 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:30.221 12:08:26 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:09:30.221 12:08:26 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:30.221 12:08:26 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:30.221 12:08:26 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:30.221 12:08:26 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:09:30.221 12:08:26 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.221 12:08:26 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:09:30.221 12:08:26 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:30.221 12:08:26 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:30.221 12:08:26 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:09:30.221 12:08:26 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.221 12:08:26 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:09:30.221 12:08:26 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:30.221 12:08:26 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:30.221 12:08:26 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:30.221 12:08:26 alias_rpc -- scripts/common.sh@368 -- # return 0 00:09:30.221 12:08:26 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.222 12:08:26 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:30.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.222 --rc genhtml_branch_coverage=1 00:09:30.222 --rc genhtml_function_coverage=1 00:09:30.222 --rc genhtml_legend=1 00:09:30.222 --rc geninfo_all_blocks=1 00:09:30.222 --rc geninfo_unexecuted_blocks=1 00:09:30.222 00:09:30.222 ' 00:09:30.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:30.222 12:08:26 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:30.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.222 --rc genhtml_branch_coverage=1 00:09:30.222 --rc genhtml_function_coverage=1 00:09:30.222 --rc genhtml_legend=1 00:09:30.222 --rc geninfo_all_blocks=1 00:09:30.222 --rc geninfo_unexecuted_blocks=1 00:09:30.222 00:09:30.222 ' 00:09:30.222 12:08:26 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:30.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.222 --rc genhtml_branch_coverage=1 00:09:30.222 --rc genhtml_function_coverage=1 00:09:30.222 --rc genhtml_legend=1 00:09:30.222 --rc geninfo_all_blocks=1 00:09:30.222 --rc geninfo_unexecuted_blocks=1 00:09:30.222 00:09:30.222 ' 00:09:30.222 12:08:26 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:30.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.222 --rc genhtml_branch_coverage=1 00:09:30.222 --rc genhtml_function_coverage=1 00:09:30.222 --rc genhtml_legend=1 00:09:30.222 --rc geninfo_all_blocks=1 00:09:30.222 --rc geninfo_unexecuted_blocks=1 00:09:30.222 00:09:30.222 ' 00:09:30.222 12:08:26 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:30.222 12:08:26 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57717 00:09:30.222 12:08:26 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57717 00:09:30.222 12:08:26 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:30.222 12:08:26 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57717 ']' 00:09:30.222 12:08:26 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.222 12:08:26 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.222 12:08:26 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:09:30.222 12:08:26 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.222 12:08:26 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.480 [2024-11-25 12:08:26.373742] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:09:30.480 [2024-11-25 12:08:26.374474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57717 ] 00:09:30.480 [2024-11-25 12:08:26.562165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.755 [2024-11-25 12:08:26.689817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.700 12:08:27 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:31.700 12:08:27 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:31.700 12:08:27 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:09:31.958 12:08:27 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57717 00:09:31.958 12:08:27 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57717 ']' 00:09:31.958 12:08:27 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57717 00:09:31.958 12:08:27 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:09:31.958 12:08:27 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:31.958 12:08:27 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57717 00:09:31.958 killing process with pid 57717 00:09:31.958 12:08:27 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:31.958 12:08:27 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:31.958 12:08:27 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57717' 00:09:31.958 
12:08:27 alias_rpc -- common/autotest_common.sh@973 -- # kill 57717 00:09:31.958 12:08:27 alias_rpc -- common/autotest_common.sh@978 -- # wait 57717 00:09:34.492 ************************************ 00:09:34.492 END TEST alias_rpc 00:09:34.492 ************************************ 00:09:34.492 00:09:34.492 real 0m4.016s 00:09:34.492 user 0m4.171s 00:09:34.492 sys 0m0.625s 00:09:34.492 12:08:30 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.492 12:08:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.492 12:08:30 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:09:34.492 12:08:30 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:34.492 12:08:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:34.492 12:08:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.492 12:08:30 -- common/autotest_common.sh@10 -- # set +x 00:09:34.492 ************************************ 00:09:34.492 START TEST spdkcli_tcp 00:09:34.492 ************************************ 00:09:34.492 12:08:30 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:34.492 * Looking for test storage... 
00:09:34.492 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:09:34.492 12:08:30 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:34.492 12:08:30 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:34.492 12:08:30 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:34.492 12:08:30 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:34.492 12:08:30 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.492 12:08:30 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.492 12:08:30 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.492 12:08:30 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.492 12:08:30 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.492 12:08:30 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.492 12:08:30 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.492 12:08:30 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:34.492 12:08:30 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.492 12:08:30 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:34.492 12:08:30 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:34.492 12:08:30 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:34.492 12:08:30 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:09:34.492 12:08:30 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:34.492 12:08:30 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:34.492 12:08:30 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:34.492 12:08:30 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:09:34.492 12:08:30 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.492 12:08:30 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:09:34.492 12:08:30 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:34.492 12:08:30 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:34.492 12:08:30 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:09:34.492 12:08:30 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.492 12:08:30 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:09:34.492 12:08:30 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:34.492 12:08:30 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:34.492 12:08:30 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:34.492 12:08:30 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:09:34.492 12:08:30 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.492 12:08:30 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:34.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.492 --rc genhtml_branch_coverage=1 00:09:34.492 --rc genhtml_function_coverage=1 00:09:34.492 --rc genhtml_legend=1 00:09:34.492 --rc geninfo_all_blocks=1 00:09:34.492 --rc geninfo_unexecuted_blocks=1 00:09:34.492 00:09:34.492 ' 00:09:34.492 12:08:30 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:34.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.492 --rc genhtml_branch_coverage=1 00:09:34.492 --rc genhtml_function_coverage=1 00:09:34.492 --rc genhtml_legend=1 00:09:34.492 --rc geninfo_all_blocks=1 00:09:34.492 --rc geninfo_unexecuted_blocks=1 00:09:34.492 00:09:34.492 ' 00:09:34.492 12:08:30 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:34.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.492 --rc genhtml_branch_coverage=1 00:09:34.492 --rc genhtml_function_coverage=1 00:09:34.492 --rc genhtml_legend=1 00:09:34.492 --rc geninfo_all_blocks=1 00:09:34.492 --rc geninfo_unexecuted_blocks=1 00:09:34.492 00:09:34.492 ' 00:09:34.492 12:08:30 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:34.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.492 --rc genhtml_branch_coverage=1 00:09:34.492 --rc genhtml_function_coverage=1 00:09:34.492 --rc genhtml_legend=1 00:09:34.492 --rc geninfo_all_blocks=1 00:09:34.492 --rc geninfo_unexecuted_blocks=1 00:09:34.492 00:09:34.492 ' 00:09:34.492 12:08:30 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:09:34.492 12:08:30 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:09:34.492 12:08:30 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:09:34.492 12:08:30 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:09:34.492 12:08:30 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:09:34.492 12:08:30 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:34.492 12:08:30 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:09:34.492 12:08:30 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:34.493 12:08:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:34.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:34.493 12:08:30 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57824 00:09:34.493 12:08:30 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57824 00:09:34.493 12:08:30 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57824 ']' 00:09:34.493 12:08:30 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:09:34.493 12:08:30 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.493 12:08:30 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:34.493 12:08:30 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.493 12:08:30 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:34.493 12:08:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:34.493 [2024-11-25 12:08:30.449810] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
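The trace above shows autotest_common.sh's waitforlisten helper polling (local max_retries=100) until spdk_tgt accepts connections on the UNIX domain socket /var/tmp/spdk.sock. A rough Python analogue of that polling loop — a sketch only; the real shell helper also checks the target pid and RPC readiness, which is omitted here:

```python
import socket
import time

def waitforlisten(sock_path, max_retries=100, delay=0.2):
    """Poll until a server accepts connections on a UNIX domain socket.

    Rough analogue of autotest_common.sh's waitforlisten; retry count
    mirrors the traced max_retries=100, the delay is an assumption.
    """
    for _ in range(max_retries):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(sock_path)
            return True          # server is up and listening
        except OSError:
            time.sleep(delay)    # not listening yet; retry
        finally:
            s.close()
    return False
```

In the log, each test script runs this gate between launching spdk_tgt and issuing its first rpc.py call.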
00:09:34.493 [2024-11-25 12:08:30.450022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57824 ] 00:09:34.751 [2024-11-25 12:08:30.646063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:34.751 [2024-11-25 12:08:30.781432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.751 [2024-11-25 12:08:30.781444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.686 12:08:31 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:35.686 12:08:31 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:09:35.686 12:08:31 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57841 00:09:35.686 12:08:31 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:09:35.686 12:08:31 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:09:35.945 [ 00:09:35.945 "bdev_malloc_delete", 00:09:35.945 "bdev_malloc_create", 00:09:35.945 "bdev_null_resize", 00:09:35.945 "bdev_null_delete", 00:09:35.945 "bdev_null_create", 00:09:35.945 "bdev_nvme_cuse_unregister", 00:09:35.945 "bdev_nvme_cuse_register", 00:09:35.945 "bdev_opal_new_user", 00:09:35.945 "bdev_opal_set_lock_state", 00:09:35.945 "bdev_opal_delete", 00:09:35.945 "bdev_opal_get_info", 00:09:35.945 "bdev_opal_create", 00:09:35.945 "bdev_nvme_opal_revert", 00:09:35.945 "bdev_nvme_opal_init", 00:09:35.945 "bdev_nvme_send_cmd", 00:09:35.945 "bdev_nvme_set_keys", 00:09:35.945 "bdev_nvme_get_path_iostat", 00:09:35.945 "bdev_nvme_get_mdns_discovery_info", 00:09:35.945 "bdev_nvme_stop_mdns_discovery", 00:09:35.945 "bdev_nvme_start_mdns_discovery", 00:09:35.945 "bdev_nvme_set_multipath_policy", 00:09:35.945 
"bdev_nvme_set_preferred_path", 00:09:35.945 "bdev_nvme_get_io_paths", 00:09:35.945 "bdev_nvme_remove_error_injection", 00:09:35.945 "bdev_nvme_add_error_injection", 00:09:35.945 "bdev_nvme_get_discovery_info", 00:09:35.945 "bdev_nvme_stop_discovery", 00:09:35.945 "bdev_nvme_start_discovery", 00:09:35.945 "bdev_nvme_get_controller_health_info", 00:09:35.945 "bdev_nvme_disable_controller", 00:09:35.945 "bdev_nvme_enable_controller", 00:09:35.945 "bdev_nvme_reset_controller", 00:09:35.945 "bdev_nvme_get_transport_statistics", 00:09:35.945 "bdev_nvme_apply_firmware", 00:09:35.945 "bdev_nvme_detach_controller", 00:09:35.945 "bdev_nvme_get_controllers", 00:09:35.945 "bdev_nvme_attach_controller", 00:09:35.945 "bdev_nvme_set_hotplug", 00:09:35.945 "bdev_nvme_set_options", 00:09:35.945 "bdev_passthru_delete", 00:09:35.945 "bdev_passthru_create", 00:09:35.945 "bdev_lvol_set_parent_bdev", 00:09:35.945 "bdev_lvol_set_parent", 00:09:35.945 "bdev_lvol_check_shallow_copy", 00:09:35.945 "bdev_lvol_start_shallow_copy", 00:09:35.945 "bdev_lvol_grow_lvstore", 00:09:35.945 "bdev_lvol_get_lvols", 00:09:35.945 "bdev_lvol_get_lvstores", 00:09:35.945 "bdev_lvol_delete", 00:09:35.945 "bdev_lvol_set_read_only", 00:09:35.945 "bdev_lvol_resize", 00:09:35.945 "bdev_lvol_decouple_parent", 00:09:35.945 "bdev_lvol_inflate", 00:09:35.945 "bdev_lvol_rename", 00:09:35.945 "bdev_lvol_clone_bdev", 00:09:35.945 "bdev_lvol_clone", 00:09:35.945 "bdev_lvol_snapshot", 00:09:35.945 "bdev_lvol_create", 00:09:35.945 "bdev_lvol_delete_lvstore", 00:09:35.945 "bdev_lvol_rename_lvstore", 00:09:35.945 "bdev_lvol_create_lvstore", 00:09:35.945 "bdev_raid_set_options", 00:09:35.945 "bdev_raid_remove_base_bdev", 00:09:35.945 "bdev_raid_add_base_bdev", 00:09:35.945 "bdev_raid_delete", 00:09:35.945 "bdev_raid_create", 00:09:35.945 "bdev_raid_get_bdevs", 00:09:35.945 "bdev_error_inject_error", 00:09:35.945 "bdev_error_delete", 00:09:35.945 "bdev_error_create", 00:09:35.945 "bdev_split_delete", 00:09:35.945 
"bdev_split_create", 00:09:35.945 "bdev_delay_delete", 00:09:35.945 "bdev_delay_create", 00:09:35.945 "bdev_delay_update_latency", 00:09:35.945 "bdev_zone_block_delete", 00:09:35.945 "bdev_zone_block_create", 00:09:35.945 "blobfs_create", 00:09:35.945 "blobfs_detect", 00:09:35.945 "blobfs_set_cache_size", 00:09:35.945 "bdev_aio_delete", 00:09:35.945 "bdev_aio_rescan", 00:09:35.945 "bdev_aio_create", 00:09:35.945 "bdev_ftl_set_property", 00:09:35.945 "bdev_ftl_get_properties", 00:09:35.945 "bdev_ftl_get_stats", 00:09:35.945 "bdev_ftl_unmap", 00:09:35.945 "bdev_ftl_unload", 00:09:35.945 "bdev_ftl_delete", 00:09:35.945 "bdev_ftl_load", 00:09:35.945 "bdev_ftl_create", 00:09:35.945 "bdev_virtio_attach_controller", 00:09:35.945 "bdev_virtio_scsi_get_devices", 00:09:35.945 "bdev_virtio_detach_controller", 00:09:35.945 "bdev_virtio_blk_set_hotplug", 00:09:35.945 "bdev_iscsi_delete", 00:09:35.945 "bdev_iscsi_create", 00:09:35.945 "bdev_iscsi_set_options", 00:09:35.945 "accel_error_inject_error", 00:09:35.945 "ioat_scan_accel_module", 00:09:35.945 "dsa_scan_accel_module", 00:09:35.945 "iaa_scan_accel_module", 00:09:35.945 "keyring_file_remove_key", 00:09:35.945 "keyring_file_add_key", 00:09:35.945 "keyring_linux_set_options", 00:09:35.945 "fsdev_aio_delete", 00:09:35.945 "fsdev_aio_create", 00:09:35.945 "iscsi_get_histogram", 00:09:35.945 "iscsi_enable_histogram", 00:09:35.945 "iscsi_set_options", 00:09:35.945 "iscsi_get_auth_groups", 00:09:35.945 "iscsi_auth_group_remove_secret", 00:09:35.945 "iscsi_auth_group_add_secret", 00:09:35.945 "iscsi_delete_auth_group", 00:09:35.945 "iscsi_create_auth_group", 00:09:35.945 "iscsi_set_discovery_auth", 00:09:35.945 "iscsi_get_options", 00:09:35.945 "iscsi_target_node_request_logout", 00:09:35.945 "iscsi_target_node_set_redirect", 00:09:35.945 "iscsi_target_node_set_auth", 00:09:35.945 "iscsi_target_node_add_lun", 00:09:35.945 "iscsi_get_stats", 00:09:35.945 "iscsi_get_connections", 00:09:35.945 "iscsi_portal_group_set_auth", 
00:09:35.945 "iscsi_start_portal_group", 00:09:35.945 "iscsi_delete_portal_group", 00:09:35.945 "iscsi_create_portal_group", 00:09:35.945 "iscsi_get_portal_groups", 00:09:35.945 "iscsi_delete_target_node", 00:09:35.945 "iscsi_target_node_remove_pg_ig_maps", 00:09:35.945 "iscsi_target_node_add_pg_ig_maps", 00:09:35.945 "iscsi_create_target_node", 00:09:35.945 "iscsi_get_target_nodes", 00:09:35.945 "iscsi_delete_initiator_group", 00:09:35.945 "iscsi_initiator_group_remove_initiators", 00:09:35.945 "iscsi_initiator_group_add_initiators", 00:09:35.945 "iscsi_create_initiator_group", 00:09:35.945 "iscsi_get_initiator_groups", 00:09:35.945 "nvmf_set_crdt", 00:09:35.945 "nvmf_set_config", 00:09:35.945 "nvmf_set_max_subsystems", 00:09:35.945 "nvmf_stop_mdns_prr", 00:09:35.945 "nvmf_publish_mdns_prr", 00:09:35.945 "nvmf_subsystem_get_listeners", 00:09:35.945 "nvmf_subsystem_get_qpairs", 00:09:35.945 "nvmf_subsystem_get_controllers", 00:09:35.945 "nvmf_get_stats", 00:09:35.945 "nvmf_get_transports", 00:09:35.945 "nvmf_create_transport", 00:09:35.945 "nvmf_get_targets", 00:09:35.945 "nvmf_delete_target", 00:09:35.945 "nvmf_create_target", 00:09:35.945 "nvmf_subsystem_allow_any_host", 00:09:35.945 "nvmf_subsystem_set_keys", 00:09:35.945 "nvmf_subsystem_remove_host", 00:09:35.945 "nvmf_subsystem_add_host", 00:09:35.945 "nvmf_ns_remove_host", 00:09:35.945 "nvmf_ns_add_host", 00:09:35.945 "nvmf_subsystem_remove_ns", 00:09:35.945 "nvmf_subsystem_set_ns_ana_group", 00:09:35.945 "nvmf_subsystem_add_ns", 00:09:35.945 "nvmf_subsystem_listener_set_ana_state", 00:09:35.945 "nvmf_discovery_get_referrals", 00:09:35.945 "nvmf_discovery_remove_referral", 00:09:35.945 "nvmf_discovery_add_referral", 00:09:35.945 "nvmf_subsystem_remove_listener", 00:09:35.945 "nvmf_subsystem_add_listener", 00:09:35.945 "nvmf_delete_subsystem", 00:09:35.945 "nvmf_create_subsystem", 00:09:35.945 "nvmf_get_subsystems", 00:09:35.945 "env_dpdk_get_mem_stats", 00:09:35.945 "nbd_get_disks", 00:09:35.945 
"nbd_stop_disk", 00:09:35.945 "nbd_start_disk", 00:09:35.945 "ublk_recover_disk", 00:09:35.945 "ublk_get_disks", 00:09:35.945 "ublk_stop_disk", 00:09:35.945 "ublk_start_disk", 00:09:35.945 "ublk_destroy_target", 00:09:35.945 "ublk_create_target", 00:09:35.945 "virtio_blk_create_transport", 00:09:35.945 "virtio_blk_get_transports", 00:09:35.945 "vhost_controller_set_coalescing", 00:09:35.945 "vhost_get_controllers", 00:09:35.945 "vhost_delete_controller", 00:09:35.945 "vhost_create_blk_controller", 00:09:35.945 "vhost_scsi_controller_remove_target", 00:09:35.945 "vhost_scsi_controller_add_target", 00:09:35.945 "vhost_start_scsi_controller", 00:09:35.945 "vhost_create_scsi_controller", 00:09:35.945 "thread_set_cpumask", 00:09:35.945 "scheduler_set_options", 00:09:35.945 "framework_get_governor", 00:09:35.945 "framework_get_scheduler", 00:09:35.945 "framework_set_scheduler", 00:09:35.945 "framework_get_reactors", 00:09:35.945 "thread_get_io_channels", 00:09:35.945 "thread_get_pollers", 00:09:35.945 "thread_get_stats", 00:09:35.945 "framework_monitor_context_switch", 00:09:35.945 "spdk_kill_instance", 00:09:35.945 "log_enable_timestamps", 00:09:35.945 "log_get_flags", 00:09:35.945 "log_clear_flag", 00:09:35.946 "log_set_flag", 00:09:35.946 "log_get_level", 00:09:35.946 "log_set_level", 00:09:35.946 "log_get_print_level", 00:09:35.946 "log_set_print_level", 00:09:35.946 "framework_enable_cpumask_locks", 00:09:35.946 "framework_disable_cpumask_locks", 00:09:35.946 "framework_wait_init", 00:09:35.946 "framework_start_init", 00:09:35.946 "scsi_get_devices", 00:09:35.946 "bdev_get_histogram", 00:09:35.946 "bdev_enable_histogram", 00:09:35.946 "bdev_set_qos_limit", 00:09:35.946 "bdev_set_qd_sampling_period", 00:09:35.946 "bdev_get_bdevs", 00:09:35.946 "bdev_reset_iostat", 00:09:35.946 "bdev_get_iostat", 00:09:35.946 "bdev_examine", 00:09:35.946 "bdev_wait_for_examine", 00:09:35.946 "bdev_set_options", 00:09:35.946 "accel_get_stats", 00:09:35.946 "accel_set_options", 
00:09:35.946 "accel_set_driver", 00:09:35.946 "accel_crypto_key_destroy", 00:09:35.946 "accel_crypto_keys_get", 00:09:35.946 "accel_crypto_key_create", 00:09:35.946 "accel_assign_opc", 00:09:35.946 "accel_get_module_info", 00:09:35.946 "accel_get_opc_assignments", 00:09:35.946 "vmd_rescan", 00:09:35.946 "vmd_remove_device", 00:09:35.946 "vmd_enable", 00:09:35.946 "sock_get_default_impl", 00:09:35.946 "sock_set_default_impl", 00:09:35.946 "sock_impl_set_options", 00:09:35.946 "sock_impl_get_options", 00:09:35.946 "iobuf_get_stats", 00:09:35.946 "iobuf_set_options", 00:09:35.946 "keyring_get_keys", 00:09:35.946 "framework_get_pci_devices", 00:09:35.946 "framework_get_config", 00:09:35.946 "framework_get_subsystems", 00:09:35.946 "fsdev_set_opts", 00:09:35.946 "fsdev_get_opts", 00:09:35.946 "trace_get_info", 00:09:35.946 "trace_get_tpoint_group_mask", 00:09:35.946 "trace_disable_tpoint_group", 00:09:35.946 "trace_enable_tpoint_group", 00:09:35.946 "trace_clear_tpoint_mask", 00:09:35.946 "trace_set_tpoint_mask", 00:09:35.946 "notify_get_notifications", 00:09:35.946 "notify_get_types", 00:09:35.946 "spdk_get_version", 00:09:35.946 "rpc_get_methods" 00:09:35.946 ] 00:09:35.946 12:08:31 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:09:35.946 12:08:31 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:35.946 12:08:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:35.946 12:08:31 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:35.946 12:08:31 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57824 00:09:35.946 12:08:31 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57824 ']' 00:09:35.946 12:08:31 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57824 00:09:35.946 12:08:31 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:09:35.946 12:08:31 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:35.946 12:08:31 spdkcli_tcp -- 
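The rpc_get_methods listing above was fetched by bridging TCP port 9998 to /var/tmp/spdk.sock with socat and pointing scripts/rpc.py at 127.0.0.1:9998. On the wire, SPDK's RPC server speaks JSON-RPC 2.0; a minimal sketch of building such a request (framing and response handling are left to rpc.py and are not shown here):

```python
import json

def build_rpc_request(method, request_id=1, params=None):
    # SPDK's RPC protocol is JSON-RPC 2.0; "params" is omitted when empty.
    req = {"jsonrpc": "2.0", "method": method, "id": request_id}
    if params:
        req["params"] = params
    return json.dumps(req)

print(build_rpc_request("rpc_get_methods"))
```

The server's reply to rpc_get_methods is the JSON array of method names dumped above.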
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57824 00:09:35.946 killing process with pid 57824 00:09:35.946 12:08:32 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:35.946 12:08:32 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:35.946 12:08:32 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57824' 00:09:35.946 12:08:32 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57824 00:09:35.946 12:08:32 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57824 00:09:38.480 ************************************ 00:09:38.480 END TEST spdkcli_tcp 00:09:38.480 ************************************ 00:09:38.480 00:09:38.480 real 0m4.146s 00:09:38.480 user 0m7.429s 00:09:38.480 sys 0m0.707s 00:09:38.480 12:08:34 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.480 12:08:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:38.480 12:08:34 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:38.480 12:08:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:38.480 12:08:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.480 12:08:34 -- common/autotest_common.sh@10 -- # set +x 00:09:38.480 ************************************ 00:09:38.480 START TEST dpdk_mem_utility 00:09:38.480 ************************************ 00:09:38.480 12:08:34 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:38.480 * Looking for test storage... 
00:09:38.480 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:09:38.480 12:08:34 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:38.480 12:08:34 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:38.480 12:08:34 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:09:38.480 12:08:34 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:38.480 12:08:34 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:38.480 12:08:34 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:38.480 12:08:34 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:38.480 12:08:34 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:09:38.480 12:08:34 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:09:38.480 12:08:34 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:09:38.480 12:08:34 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:09:38.480 12:08:34 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:09:38.480 12:08:34 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:09:38.480 12:08:34 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:09:38.480 12:08:34 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:38.480 12:08:34 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:09:38.480 12:08:34 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:09:38.480 12:08:34 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:38.480 12:08:34 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:38.480 12:08:34 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:09:38.480 12:08:34 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:09:38.480 12:08:34 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:38.480 12:08:34 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:09:38.480 12:08:34 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:09:38.480 12:08:34 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:09:38.480 12:08:34 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:09:38.480 12:08:34 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:38.480 12:08:34 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:09:38.480 12:08:34 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:09:38.480 12:08:34 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:38.480 12:08:34 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:38.480 12:08:34 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:09:38.480 12:08:34 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:38.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
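Before exporting the LCOV options, the xtrace shows scripts/common.sh comparing the installed lcov version against 2: cmp_versions splits each version string on `.`, `-` and `:` (IFS=.-:) and compares components numerically, padding the shorter array with zeros. A Python sketch of that comparison (treating non-numeric components as 0 is an assumption; the shell's decimal helper validates digits):

```python
import re

def _components(version):
    # Split on '.', '-' and ':' like the shell's IFS=.-:;
    # non-numeric parts compare as 0 (assumption).
    return [int(p) if p.isdigit() else 0 for p in re.split(r"[.:-]", version)]

def cmp_versions(v1, op, v2):
    a, b = _components(v1), _components(v2)
    n = max(len(a), len(b))
    a += [0] * (n - len(a))  # missing components compare as 0,
    b += [0] * (n - len(b))  # mirroring the shell loop's default
    if op == "<":
        return a < b
    if op == ">":
        return a > b
    return a == b

def lt(v1, v2):
    return cmp_versions(v1, "<", v2)
```

Here `lt 1.15 2` succeeds, so the test scripts enable the branch/function coverage flags seen in the exported LCOV_OPTS.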
00:09:38.480 12:08:34 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:38.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.480 --rc genhtml_branch_coverage=1 00:09:38.480 --rc genhtml_function_coverage=1 00:09:38.480 --rc genhtml_legend=1 00:09:38.480 --rc geninfo_all_blocks=1 00:09:38.480 --rc geninfo_unexecuted_blocks=1 00:09:38.480 00:09:38.480 ' 00:09:38.480 12:08:34 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:38.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.480 --rc genhtml_branch_coverage=1 00:09:38.480 --rc genhtml_function_coverage=1 00:09:38.480 --rc genhtml_legend=1 00:09:38.480 --rc geninfo_all_blocks=1 00:09:38.480 --rc geninfo_unexecuted_blocks=1 00:09:38.480 00:09:38.480 ' 00:09:38.480 12:08:34 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:38.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.480 --rc genhtml_branch_coverage=1 00:09:38.480 --rc genhtml_function_coverage=1 00:09:38.480 --rc genhtml_legend=1 00:09:38.480 --rc geninfo_all_blocks=1 00:09:38.480 --rc geninfo_unexecuted_blocks=1 00:09:38.480 00:09:38.480 ' 00:09:38.480 12:08:34 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:38.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.480 --rc genhtml_branch_coverage=1 00:09:38.480 --rc genhtml_function_coverage=1 00:09:38.480 --rc genhtml_legend=1 00:09:38.480 --rc geninfo_all_blocks=1 00:09:38.480 --rc geninfo_unexecuted_blocks=1 00:09:38.480 00:09:38.480 ' 00:09:38.480 12:08:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:38.480 12:08:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57946 00:09:38.480 12:08:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57946 00:09:38.480 
12:08:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:38.480 12:08:34 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57946 ']' 00:09:38.480 12:08:34 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.480 12:08:34 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:38.480 12:08:34 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.480 12:08:34 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:38.480 12:08:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:38.740 [2024-11-25 12:08:34.610844] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:09:38.740 [2024-11-25 12:08:34.611305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57946 ] 00:09:38.740 [2024-11-25 12:08:34.794441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.998 [2024-11-25 12:08:34.927865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.936 12:08:35 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:39.936 12:08:35 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:09:39.936 12:08:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:39.936 12:08:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:39.936 12:08:35 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.936 12:08:35 dpdk_mem_utility -- 
common/autotest_common.sh@10 -- # set +x 00:09:39.936 { 00:09:39.936 "filename": "/tmp/spdk_mem_dump.txt" 00:09:39.936 } 00:09:39.936 12:08:35 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.936 12:08:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:39.936 DPDK memory size 816.000000 MiB in 1 heap(s) 00:09:39.936 1 heaps totaling size 816.000000 MiB 00:09:39.936 size: 816.000000 MiB heap id: 0 00:09:39.936 end heaps---------- 00:09:39.936 9 mempools totaling size 595.772034 MiB 00:09:39.936 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:39.936 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:39.936 size: 92.545471 MiB name: bdev_io_57946 00:09:39.936 size: 50.003479 MiB name: msgpool_57946 00:09:39.936 size: 36.509338 MiB name: fsdev_io_57946 00:09:39.936 size: 21.763794 MiB name: PDU_Pool 00:09:39.936 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:39.936 size: 4.133484 MiB name: evtpool_57946 00:09:39.936 size: 0.026123 MiB name: Session_Pool 00:09:39.936 end mempools------- 00:09:39.936 6 memzones totaling size 4.142822 MiB 00:09:39.936 size: 1.000366 MiB name: RG_ring_0_57946 00:09:39.936 size: 1.000366 MiB name: RG_ring_1_57946 00:09:39.936 size: 1.000366 MiB name: RG_ring_4_57946 00:09:39.936 size: 1.000366 MiB name: RG_ring_5_57946 00:09:39.936 size: 0.125366 MiB name: RG_ring_2_57946 00:09:39.936 size: 0.015991 MiB name: RG_ring_3_57946 00:09:39.936 end memzones------- 00:09:39.936 12:08:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:09:39.936 heap id: 0 total size: 816.000000 MiB number of busy elements: 312 number of free elements: 18 00:09:39.936 list of free elements. 
size: 16.792114 MiB 00:09:39.936 element at address: 0x200006400000 with size: 1.995972 MiB 00:09:39.936 element at address: 0x20000a600000 with size: 1.995972 MiB 00:09:39.936 element at address: 0x200003e00000 with size: 1.991028 MiB 00:09:39.936 element at address: 0x200018d00040 with size: 0.999939 MiB 00:09:39.936 element at address: 0x200019100040 with size: 0.999939 MiB 00:09:39.936 element at address: 0x200019200000 with size: 0.999084 MiB 00:09:39.936 element at address: 0x200031e00000 with size: 0.994324 MiB 00:09:39.936 element at address: 0x200000400000 with size: 0.992004 MiB 00:09:39.936 element at address: 0x200018a00000 with size: 0.959656 MiB 00:09:39.936 element at address: 0x200019500040 with size: 0.936401 MiB 00:09:39.936 element at address: 0x200000200000 with size: 0.716980 MiB 00:09:39.936 element at address: 0x20001ac00000 with size: 0.562683 MiB 00:09:39.936 element at address: 0x200000c00000 with size: 0.490173 MiB 00:09:39.936 element at address: 0x200018e00000 with size: 0.487976 MiB 00:09:39.936 element at address: 0x200019600000 with size: 0.485413 MiB 00:09:39.936 element at address: 0x200012c00000 with size: 0.443237 MiB 00:09:39.936 element at address: 0x200028000000 with size: 0.390442 MiB 00:09:39.936 element at address: 0x200000800000 with size: 0.350891 MiB 00:09:39.936 list of standard malloc elements. 
size: 199.286987 MiB 00:09:39.936 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:09:39.936 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:09:39.936 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:09:39.936 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:09:39.936 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:09:39.936 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:09:39.936 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:09:39.936 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:09:39.936 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:09:39.936 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:09:39.936 element at address: 0x200012bff040 with size: 0.000305 MiB 00:09:39.936 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:09:39.936 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:09:39.936 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:09:39.936 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:09:39.936 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:09:39.936 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:09:39.936 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:09:39.936 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:09:39.936 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:09:39.936 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:09:39.936 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:09:39.936 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:09:39.936 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:09:39.936 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:09:39.936 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:09:39.936 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:09:39.936 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:09:39.936 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:09:39.936 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:09:39.936 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:09:39.936 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:09:39.936 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:09:39.936 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:09:39.936 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:09:39.936 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:09:39.936 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:09:39.936 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:09:39.936 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:09:39.936 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:09:39.936 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:09:39.936 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:09:39.936 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:09:39.936 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:09:39.936 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:09:39.936 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:09:39.936 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:09:39.936 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:09:39.936 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:09:39.936 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:09:39.936 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:09:39.936 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:09:39.936 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:09:39.936 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:09:39.936 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:09:39.936 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:09:39.936 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:09:39.936 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:09:39.936 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:09:39.936 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:09:39.936 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:09:39.936 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:09:39.936 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:09:39.936 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:09:39.936 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:09:39.936 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:09:39.936 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:09:39.936 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:09:39.936 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:09:39.936 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:09:39.936 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:09:39.936 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:09:39.936 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:09:39.936 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:09:39.936 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:09:39.936 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:09:39.936 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200000c7e8c0 with 
size: 0.000244 MiB 00:09:39.937 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200000cff000 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200012bff180 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200012bff280 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200012bff380 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200012bff480 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200012bff580 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200012bff680 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200012bff780 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200012bff880 with size: 0.000244 MiB 00:09:39.937 element at address: 
0x200012bff980 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200012c71780 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200012c71880 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200012c71980 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200012c72080 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200012c72180 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:09:39.937 
element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:09:39.937 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:09:39.937 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac914c0 with size: 0.000244 
MiB 00:09:39.937 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac930c0 
with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:09:39.937 element at 
address: 0x20001ac94cc0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200028063f40 with size: 0.000244 MiB 00:09:39.937 element at address: 0x200028064040 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806af80 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806b080 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806b180 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806b280 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806b380 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806b480 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806b580 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806b680 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806b780 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806b880 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806b980 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806be80 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806bf80 with size: 0.000244 MiB 
00:09:39.938 element at address: 0x20002806c080 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806c180 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806c280 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806c380 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806c480 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806c580 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806c680 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806c780 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806c880 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806c980 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806d080 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806d180 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806d280 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806d380 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806d480 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806d580 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806d680 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806d780 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806d880 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806d980 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806da80 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806db80 with 
size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806de80 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806df80 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806e080 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806e180 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806e280 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806e380 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806e480 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806e580 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806e680 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806e780 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806e880 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806e980 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806f080 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806f180 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806f280 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806f380 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806f480 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806f580 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806f680 with size: 0.000244 MiB 00:09:39.938 element at address: 
0x20002806f780 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806f880 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806f980 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:09:39.938 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:09:39.938 list of memzone associated elements. size: 599.920898 MiB 00:09:39.938 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:09:39.938 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:39.938 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:09:39.938 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:39.938 element at address: 0x200012df4740 with size: 92.045105 MiB 00:09:39.938 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57946_0 00:09:39.938 element at address: 0x200000dff340 with size: 48.003113 MiB 00:09:39.938 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57946_0 00:09:39.938 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:09:39.938 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57946_0 00:09:39.938 element at address: 0x2000197be900 with size: 20.255615 MiB 00:09:39.938 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:39.938 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:09:39.938 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:39.938 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:09:39.938 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57946_0 00:09:39.938 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:09:39.938 associated memzone info: size: 2.000366 
MiB name: RG_MP_msgpool_57946 00:09:39.938 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:09:39.938 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57946 00:09:39.938 element at address: 0x200018efde00 with size: 1.008179 MiB 00:09:39.938 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:39.938 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:09:39.938 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:39.938 element at address: 0x200018afde00 with size: 1.008179 MiB 00:09:39.938 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:39.938 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:09:39.938 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:39.938 element at address: 0x200000cff100 with size: 1.000549 MiB 00:09:39.938 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57946 00:09:39.938 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:09:39.938 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57946 00:09:39.938 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:09:39.938 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57946 00:09:39.938 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:09:39.938 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57946 00:09:39.938 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:09:39.938 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57946 00:09:39.938 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:09:39.938 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57946 00:09:39.938 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:09:39.938 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:39.938 element at address: 0x200012c72280 with size: 0.500549 MiB 00:09:39.938 associated memzone info: size: 0.500366 MiB name: 
RG_MP_SCSI_TASK_Pool 00:09:39.938 element at address: 0x20001967c440 with size: 0.250549 MiB 00:09:39.938 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:09:39.938 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:09:39.938 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57946 00:09:39.938 element at address: 0x20000085df80 with size: 0.125549 MiB 00:09:39.938 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57946 00:09:39.938 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:09:39.938 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:39.938 element at address: 0x200028064140 with size: 0.023804 MiB 00:09:39.938 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:39.938 element at address: 0x200000859d40 with size: 0.016174 MiB 00:09:39.938 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57946 00:09:39.938 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:09:39.938 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:39.938 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:09:39.938 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57946 00:09:39.938 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:09:39.938 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57946 00:09:39.938 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:09:39.938 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57946 00:09:39.939 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:09:39.939 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:39.939 12:08:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:39.939 12:08:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57946 00:09:39.939 12:08:35 dpdk_mem_utility -- 
common/autotest_common.sh@954 -- # '[' -z 57946 ']' 00:09:39.939 12:08:35 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57946 00:09:39.939 12:08:35 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:09:39.939 12:08:35 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:39.939 12:08:35 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57946 00:09:39.939 killing process with pid 57946 00:09:39.939 12:08:35 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:39.939 12:08:35 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:39.939 12:08:35 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57946' 00:09:39.939 12:08:35 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57946 00:09:39.939 12:08:35 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57946 00:09:42.473 00:09:42.473 real 0m3.840s 00:09:42.473 user 0m3.863s 00:09:42.473 sys 0m0.633s 00:09:42.473 12:08:38 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.473 12:08:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:42.473 ************************************ 00:09:42.473 END TEST dpdk_mem_utility 00:09:42.473 ************************************ 00:09:42.473 12:08:38 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:42.473 12:08:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:42.473 12:08:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.473 12:08:38 -- common/autotest_common.sh@10 -- # set +x 00:09:42.473 ************************************ 00:09:42.473 START TEST event 00:09:42.473 ************************************ 00:09:42.473 12:08:38 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:42.473 * Looking for test 
storage... 00:09:42.473 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:42.473 12:08:38 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:42.473 12:08:38 event -- common/autotest_common.sh@1693 -- # lcov --version 00:09:42.473 12:08:38 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:42.473 12:08:38 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:42.473 12:08:38 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:42.473 12:08:38 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:42.473 12:08:38 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:42.473 12:08:38 event -- scripts/common.sh@336 -- # IFS=.-: 00:09:42.473 12:08:38 event -- scripts/common.sh@336 -- # read -ra ver1 00:09:42.473 12:08:38 event -- scripts/common.sh@337 -- # IFS=.-: 00:09:42.473 12:08:38 event -- scripts/common.sh@337 -- # read -ra ver2 00:09:42.473 12:08:38 event -- scripts/common.sh@338 -- # local 'op=<' 00:09:42.473 12:08:38 event -- scripts/common.sh@340 -- # ver1_l=2 00:09:42.473 12:08:38 event -- scripts/common.sh@341 -- # ver2_l=1 00:09:42.473 12:08:38 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:42.473 12:08:38 event -- scripts/common.sh@344 -- # case "$op" in 00:09:42.473 12:08:38 event -- scripts/common.sh@345 -- # : 1 00:09:42.473 12:08:38 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:42.473 12:08:38 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:42.473 12:08:38 event -- scripts/common.sh@365 -- # decimal 1 00:09:42.473 12:08:38 event -- scripts/common.sh@353 -- # local d=1 00:09:42.473 12:08:38 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:42.473 12:08:38 event -- scripts/common.sh@355 -- # echo 1 00:09:42.473 12:08:38 event -- scripts/common.sh@365 -- # ver1[v]=1 00:09:42.473 12:08:38 event -- scripts/common.sh@366 -- # decimal 2 00:09:42.473 12:08:38 event -- scripts/common.sh@353 -- # local d=2 00:09:42.473 12:08:38 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:42.473 12:08:38 event -- scripts/common.sh@355 -- # echo 2 00:09:42.473 12:08:38 event -- scripts/common.sh@366 -- # ver2[v]=2 00:09:42.473 12:08:38 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:42.473 12:08:38 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:42.473 12:08:38 event -- scripts/common.sh@368 -- # return 0 00:09:42.473 12:08:38 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:42.473 12:08:38 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:42.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.473 --rc genhtml_branch_coverage=1 00:09:42.473 --rc genhtml_function_coverage=1 00:09:42.473 --rc genhtml_legend=1 00:09:42.473 --rc geninfo_all_blocks=1 00:09:42.473 --rc geninfo_unexecuted_blocks=1 00:09:42.473 00:09:42.473 ' 00:09:42.473 12:08:38 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:42.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.473 --rc genhtml_branch_coverage=1 00:09:42.473 --rc genhtml_function_coverage=1 00:09:42.473 --rc genhtml_legend=1 00:09:42.473 --rc geninfo_all_blocks=1 00:09:42.473 --rc geninfo_unexecuted_blocks=1 00:09:42.473 00:09:42.473 ' 00:09:42.473 12:08:38 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:42.473 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:42.473 --rc genhtml_branch_coverage=1 00:09:42.473 --rc genhtml_function_coverage=1 00:09:42.473 --rc genhtml_legend=1 00:09:42.473 --rc geninfo_all_blocks=1 00:09:42.473 --rc geninfo_unexecuted_blocks=1 00:09:42.473 00:09:42.473 ' 00:09:42.473 12:08:38 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:42.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.473 --rc genhtml_branch_coverage=1 00:09:42.473 --rc genhtml_function_coverage=1 00:09:42.473 --rc genhtml_legend=1 00:09:42.473 --rc geninfo_all_blocks=1 00:09:42.473 --rc geninfo_unexecuted_blocks=1 00:09:42.473 00:09:42.473 ' 00:09:42.473 12:08:38 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:42.473 12:08:38 event -- bdev/nbd_common.sh@6 -- # set -e 00:09:42.473 12:08:38 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:42.473 12:08:38 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:42.473 12:08:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.473 12:08:38 event -- common/autotest_common.sh@10 -- # set +x 00:09:42.473 ************************************ 00:09:42.473 START TEST event_perf 00:09:42.473 ************************************ 00:09:42.473 12:08:38 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:42.474 Running I/O for 1 seconds...[2024-11-25 12:08:38.456421] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:09:42.474 [2024-11-25 12:08:38.456778] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58054 ] 00:09:42.732 [2024-11-25 12:08:38.642913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:42.732 [2024-11-25 12:08:38.782531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.732 [2024-11-25 12:08:38.782677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:42.732 [2024-11-25 12:08:38.782791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.732 Running I/O for 1 seconds...[2024-11-25 12:08:38.782805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:44.117 00:09:44.117 lcore 0: 201591 00:09:44.117 lcore 1: 201589 00:09:44.117 lcore 2: 201591 00:09:44.117 lcore 3: 201591 00:09:44.117 done. 
00:09:44.117 00:09:44.117 real 0m1.634s 00:09:44.117 user 0m4.376s 00:09:44.117 sys 0m0.132s 00:09:44.117 ************************************ 00:09:44.117 END TEST event_perf 00:09:44.117 ************************************ 00:09:44.117 12:08:40 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.117 12:08:40 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:09:44.117 12:08:40 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:44.117 12:08:40 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:44.117 12:08:40 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.117 12:08:40 event -- common/autotest_common.sh@10 -- # set +x 00:09:44.117 ************************************ 00:09:44.117 START TEST event_reactor 00:09:44.117 ************************************ 00:09:44.117 12:08:40 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:44.117 [2024-11-25 12:08:40.134609] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:09:44.117 [2024-11-25 12:08:40.135053] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58088 ] 00:09:44.376 [2024-11-25 12:08:40.318792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.666 [2024-11-25 12:08:40.476204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.602 test_start 00:09:45.602 oneshot 00:09:45.602 tick 100 00:09:45.602 tick 100 00:09:45.602 tick 250 00:09:45.602 tick 100 00:09:45.602 tick 100 00:09:45.602 tick 100 00:09:45.602 tick 250 00:09:45.602 tick 500 00:09:45.602 tick 100 00:09:45.602 tick 100 00:09:45.602 tick 250 00:09:45.602 tick 100 00:09:45.602 tick 100 00:09:45.602 test_end 00:09:45.862 ************************************ 00:09:45.862 END TEST event_reactor 00:09:45.862 ************************************ 00:09:45.862 00:09:45.862 real 0m1.602s 00:09:45.862 user 0m1.403s 00:09:45.862 sys 0m0.089s 00:09:45.862 12:08:41 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.862 12:08:41 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:09:45.862 12:08:41 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:45.862 12:08:41 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:45.862 12:08:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.862 12:08:41 event -- common/autotest_common.sh@10 -- # set +x 00:09:45.862 ************************************ 00:09:45.862 START TEST event_reactor_perf 00:09:45.862 ************************************ 00:09:45.862 12:08:41 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:45.862 [2024-11-25 
12:08:41.799791] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:09:45.862 [2024-11-25 12:08:41.799984] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58130 ] 00:09:46.121 [2024-11-25 12:08:41.987218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.121 [2024-11-25 12:08:42.119655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.498 test_start 00:09:47.498 test_end 00:09:47.498 Performance: 272255 events per second 00:09:47.498 ************************************ 00:09:47.498 END TEST event_reactor_perf 00:09:47.498 ************************************ 00:09:47.498 00:09:47.498 real 0m1.614s 00:09:47.499 user 0m1.390s 00:09:47.499 sys 0m0.112s 00:09:47.499 12:08:43 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.499 12:08:43 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:09:47.499 12:08:43 event -- event/event.sh@49 -- # uname -s 00:09:47.499 12:08:43 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:47.499 12:08:43 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:47.499 12:08:43 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:47.499 12:08:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.499 12:08:43 event -- common/autotest_common.sh@10 -- # set +x 00:09:47.499 ************************************ 00:09:47.499 START TEST event_scheduler 00:09:47.499 ************************************ 00:09:47.499 12:08:43 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:47.499 * Looking for test storage... 
00:09:47.499 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:09:47.499 12:08:43 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:47.499 12:08:43 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:09:47.499 12:08:43 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:47.499 12:08:43 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:47.499 12:08:43 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:47.499 12:08:43 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:47.499 12:08:43 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:47.499 12:08:43 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:09:47.499 12:08:43 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:09:47.499 12:08:43 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:09:47.499 12:08:43 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:09:47.499 12:08:43 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:09:47.499 12:08:43 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:09:47.499 12:08:43 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:09:47.499 12:08:43 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:47.499 12:08:43 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:09:47.499 12:08:43 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:09:47.499 12:08:43 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:47.499 12:08:43 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:47.499 12:08:43 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:09:47.499 12:08:43 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:09:47.499 12:08:43 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:47.499 12:08:43 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:09:47.499 12:08:43 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:09:47.499 12:08:43 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:09:47.757 12:08:43 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:09:47.757 12:08:43 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:47.757 12:08:43 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:09:47.757 12:08:43 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:09:47.757 12:08:43 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:47.757 12:08:43 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:47.757 12:08:43 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:09:47.757 12:08:43 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:47.757 12:08:43 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:47.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.757 --rc genhtml_branch_coverage=1 00:09:47.757 --rc genhtml_function_coverage=1 00:09:47.757 --rc genhtml_legend=1 00:09:47.757 --rc geninfo_all_blocks=1 00:09:47.757 --rc geninfo_unexecuted_blocks=1 00:09:47.757 00:09:47.757 ' 00:09:47.757 12:08:43 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:47.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.757 --rc genhtml_branch_coverage=1 00:09:47.757 --rc genhtml_function_coverage=1 00:09:47.757 --rc 
genhtml_legend=1 00:09:47.757 --rc geninfo_all_blocks=1 00:09:47.757 --rc geninfo_unexecuted_blocks=1 00:09:47.757 00:09:47.757 ' 00:09:47.757 12:08:43 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:47.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.757 --rc genhtml_branch_coverage=1 00:09:47.757 --rc genhtml_function_coverage=1 00:09:47.757 --rc genhtml_legend=1 00:09:47.757 --rc geninfo_all_blocks=1 00:09:47.757 --rc geninfo_unexecuted_blocks=1 00:09:47.757 00:09:47.757 ' 00:09:47.757 12:08:43 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:47.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.757 --rc genhtml_branch_coverage=1 00:09:47.757 --rc genhtml_function_coverage=1 00:09:47.757 --rc genhtml_legend=1 00:09:47.757 --rc geninfo_all_blocks=1 00:09:47.757 --rc geninfo_unexecuted_blocks=1 00:09:47.757 00:09:47.757 ' 00:09:47.757 12:08:43 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:47.757 12:08:43 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58206 00:09:47.757 12:08:43 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:47.757 12:08:43 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:47.757 12:08:43 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58206 00:09:47.757 12:08:43 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58206 ']' 00:09:47.757 12:08:43 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.757 12:08:43 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:47.757 12:08:43 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:09:47.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.757 12:08:43 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:47.757 12:08:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:47.758 [2024-11-25 12:08:43.709130] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:09:47.758 [2024-11-25 12:08:43.709649] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58206 ] 00:09:48.017 [2024-11-25 12:08:43.894866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:48.017 [2024-11-25 12:08:44.033723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.017 [2024-11-25 12:08:44.033874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:48.017 [2024-11-25 12:08:44.034035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:48.017 [2024-11-25 12:08:44.034143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:48.583 12:08:44 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:48.583 12:08:44 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:09:48.583 12:08:44 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:48.583 12:08:44 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.583 12:08:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:48.583 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:48.583 POWER: Cannot set governor of lcore 0 to userspace 00:09:48.583 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:48.583 POWER: Cannot set governor of lcore 0 to performance 00:09:48.583 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:48.583 POWER: Cannot set governor of lcore 0 to userspace 00:09:48.583 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:48.583 POWER: Cannot set governor of lcore 0 to userspace 00:09:48.583 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:09:48.583 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:09:48.583 POWER: Unable to set Power Management Environment for lcore 0 00:09:48.583 [2024-11-25 12:08:44.668627] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:09:48.583 [2024-11-25 12:08:44.668656] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:09:48.583 [2024-11-25 12:08:44.668670] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:09:48.583 [2024-11-25 12:08:44.668702] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:09:48.583 [2024-11-25 12:08:44.668715] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:09:48.583 [2024-11-25 12:08:44.668733] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:09:48.842 12:08:44 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.842 12:08:44 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:48.842 12:08:44 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.842 12:08:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:49.101 [2024-11-25 12:08:44.994333] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:09:49.101 12:08:44 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.101 12:08:44 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:49.101 12:08:44 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:49.101 12:08:44 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:49.101 12:08:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:49.101 ************************************ 00:09:49.101 START TEST scheduler_create_thread 00:09:49.101 ************************************ 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:49.101 2 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:49.101 3 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:49.101 4 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:49.101 5 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:49.101 6 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:09:49.101 7 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:49.101 8 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:49.101 9 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:49.101 10 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.101 12:08:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:50.546 12:08:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.546 12:08:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:50.546 12:08:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:50.546 12:08:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.546 12:08:46 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:51.926 ************************************ 00:09:51.926 END TEST scheduler_create_thread 00:09:51.926 ************************************ 00:09:51.926 12:08:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.926 00:09:51.926 real 0m2.622s 00:09:51.926 user 0m0.018s 00:09:51.926 sys 0m0.004s 00:09:51.926 12:08:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.926 12:08:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:51.926 12:08:47 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:51.926 12:08:47 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58206 00:09:51.926 12:08:47 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58206 ']' 00:09:51.926 12:08:47 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58206 00:09:51.926 12:08:47 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:09:51.926 12:08:47 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:51.926 12:08:47 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58206 00:09:51.926 killing process with pid 58206 00:09:51.926 12:08:47 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:09:51.926 12:08:47 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:09:51.926 12:08:47 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58206' 00:09:51.926 12:08:47 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58206 00:09:51.926 12:08:47 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58206 00:09:52.186 [2024-11-25 12:08:48.109189] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:09:53.122 ************************************ 00:09:53.122 END TEST event_scheduler 00:09:53.122 ************************************ 00:09:53.122 00:09:53.122 real 0m5.781s 00:09:53.122 user 0m10.118s 00:09:53.122 sys 0m0.491s 00:09:53.122 12:08:49 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:53.122 12:08:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:53.381 12:08:49 event -- event/event.sh@51 -- # modprobe -n nbd 00:09:53.381 12:08:49 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:53.381 12:08:49 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:53.381 12:08:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.381 12:08:49 event -- common/autotest_common.sh@10 -- # set +x 00:09:53.381 ************************************ 00:09:53.381 START TEST app_repeat 00:09:53.381 ************************************ 00:09:53.381 12:08:49 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:09:53.381 12:08:49 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:53.381 12:08:49 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:53.381 12:08:49 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:09:53.381 12:08:49 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:53.381 12:08:49 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:09:53.381 12:08:49 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:09:53.381 12:08:49 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:09:53.381 12:08:49 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58312 00:09:53.381 12:08:49 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:53.381 Process app_repeat pid: 58312 00:09:53.381 
12:08:49 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58312' 00:09:53.381 spdk_app_start Round 0 00:09:53.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:53.381 12:08:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:53.381 12:08:49 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:53.381 12:08:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:53.381 12:08:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58312 /var/tmp/spdk-nbd.sock 00:09:53.381 12:08:49 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58312 ']' 00:09:53.381 12:08:49 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:53.381 12:08:49 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:53.381 12:08:49 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:53.381 12:08:49 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:53.381 12:08:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:53.381 [2024-11-25 12:08:49.312397] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:09:53.381 [2024-11-25 12:08:49.312816] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58312 ] 00:09:53.640 [2024-11-25 12:08:49.501812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:53.640 [2024-11-25 12:08:49.639447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.640 [2024-11-25 12:08:49.639467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:54.579 12:08:50 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:54.579 12:08:50 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:54.580 12:08:50 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:54.839 Malloc0 00:09:54.839 12:08:50 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:55.098 Malloc1 00:09:55.098 12:08:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:55.098 12:08:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:55.098 12:08:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:55.098 12:08:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:55.098 12:08:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:55.098 12:08:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:55.098 12:08:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:55.098 12:08:51 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:55.098 12:08:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:55.098 12:08:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:09:55.098 12:08:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:55.098 12:08:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:09:55.098 12:08:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:09:55.098 12:08:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:09:55.098 12:08:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:55.098 12:08:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:09:55.356 /dev/nbd0
00:09:55.357 12:08:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:09:55.357 12:08:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:09:55.357 12:08:51 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:09:55.357 12:08:51 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:09:55.357 12:08:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:55.357 12:08:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:55.357 12:08:51 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:09:55.357 12:08:51 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:09:55.357 12:08:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:55.357 12:08:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:55.357 12:08:51 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:09:55.357 1+0 records in
00:09:55.357 1+0 records out
00:09:55.357 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405321 s, 10.1 MB/s
00:09:55.357 12:08:51 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:55.357 12:08:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:09:55.357 12:08:51 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:55.357 12:08:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:55.357 12:08:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:09:55.357 12:08:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:55.357 12:08:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:55.357 12:08:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:09:55.616 /dev/nbd1
00:09:55.616 12:08:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:09:55.616 12:08:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:09:55.616 12:08:51 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:09:55.616 12:08:51 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:09:55.616 12:08:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:55.616 12:08:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:55.616 12:08:51 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:09:55.616 12:08:51 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:09:55.616 12:08:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:55.616 12:08:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:55.616 12:08:51 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:09:55.616 1+0 records in
00:09:55.616 1+0 records out
00:09:55.616 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345061 s, 11.9 MB/s
00:09:55.873 12:08:51 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:55.873 12:08:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:09:55.873 12:08:51 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:55.873 12:08:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:55.873 12:08:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:09:55.873 12:08:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:55.873 12:08:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:55.873 12:08:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:55.873 12:08:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:55.873 12:08:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:56.131 12:08:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:09:56.131 {
00:09:56.131 "nbd_device": "/dev/nbd0",
00:09:56.131 "bdev_name": "Malloc0"
00:09:56.131 },
00:09:56.131 {
00:09:56.131 "nbd_device": "/dev/nbd1",
00:09:56.131 "bdev_name": "Malloc1"
00:09:56.131 }
00:09:56.131 ]'
00:09:56.131 12:08:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:09:56.131 {
00:09:56.131 "nbd_device": "/dev/nbd0",
00:09:56.131 "bdev_name": "Malloc0"
00:09:56.131 },
00:09:56.131 {
00:09:56.131 "nbd_device": "/dev/nbd1",
00:09:56.131 "bdev_name": "Malloc1"
00:09:56.131 }
00:09:56.131 ]'
00:09:56.131 12:08:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:56.131 12:08:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:09:56.131 /dev/nbd1'
00:09:56.131 12:08:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:09:56.131 /dev/nbd1'
00:09:56.131 12:08:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:56.131 12:08:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:09:56.131 12:08:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:09:56.131 12:08:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:09:56.131 12:08:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:09:56.131 12:08:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:09:56.131 12:08:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:56.131 12:08:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:56.131 12:08:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:09:56.131 12:08:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:09:56.131 12:08:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:09:56.131 12:08:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:09:56.131 256+0 records in
00:09:56.131 256+0 records out
00:09:56.131 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00818277 s, 128 MB/s
00:09:56.131 12:08:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:56.131 12:08:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:09:56.131 256+0 records in
00:09:56.131 256+0 records out
00:09:56.131 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0368486 s, 28.5 MB/s
00:09:56.131 12:08:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:56.131 12:08:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:09:56.131 256+0 records in
00:09:56.131 256+0 records out
00:09:56.131 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0423199 s, 24.8 MB/s
00:09:56.131 12:08:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:09:56.131 12:08:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:56.131 12:08:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:56.131 12:08:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:09:56.131 12:08:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:09:56.131 12:08:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:09:56.131 12:08:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:09:56.131 12:08:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:56.131 12:08:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:09:56.131 12:08:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:56.131 12:08:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:09:56.131 12:08:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:09:56.131 12:08:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:09:56.131 12:08:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:56.131 12:08:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:56.131 12:08:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:09:56.131 12:08:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:09:56.131 12:08:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:56.131 12:08:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:09:56.390 12:08:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:09:56.390 12:08:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:09:56.390 12:08:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:09:56.390 12:08:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:56.390 12:08:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:56.390 12:08:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:09:56.390 12:08:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:09:56.390 12:08:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:09:56.390 12:08:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:56.390 12:08:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:09:56.957 12:08:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:09:56.957 12:08:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:09:56.957 12:08:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:09:56.957 12:08:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:56.957 12:08:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:56.957 12:08:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:09:56.957 12:08:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:09:56.957 12:08:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:09:56.957 12:08:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:56.957 12:08:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:56.957 12:08:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:57.215 12:08:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:09:57.215 12:08:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:09:57.215 12:08:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:57.215 12:08:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:09:57.215 12:08:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:09:57.215 12:08:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:57.215 12:08:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:09:57.215 12:08:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:09:57.215 12:08:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:09:57.215 12:08:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:09:57.215 12:08:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:09:57.215 12:08:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:09:57.215 12:08:53 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:09:57.782 12:08:53 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:09:58.740 [2024-11-25 12:08:54.693576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:59.015 [2024-11-25 12:08:54.823848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:59.015 [2024-11-25 12:08:54.823857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:59.015 [2024-11-25 12:08:55.022510] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:09:59.015 [2024-11-25 12:08:55.022594] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:10:00.918 spdk_app_start Round 1
00:10:00.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:10:00.918 12:08:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:10:00.918 12:08:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:10:00.918 12:08:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58312 /var/tmp/spdk-nbd.sock
00:10:00.918 12:08:56 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58312 ']'
00:10:00.918 12:08:56 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:10:00.918 12:08:56 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:00.918 12:08:56 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:10:00.918 12:08:56 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:00.918 12:08:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:10:00.918 12:08:56 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:00.918 12:08:56 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:10:00.918 12:08:56 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:10:01.483 Malloc0
00:10:01.483 12:08:57 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:10:01.741 Malloc1
00:10:01.741 12:08:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:10:01.741 12:08:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:01.741 12:08:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:10:01.741 12:08:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:10:01.742 12:08:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:01.742 12:08:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:10:01.742 12:08:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:10:01.742 12:08:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:01.742 12:08:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:10:01.742 12:08:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:10:01.742 12:08:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:01.742 12:08:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:10:01.742 12:08:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:10:01.742 12:08:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:10:01.742 12:08:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:10:01.742 12:08:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:10:01.999 /dev/nbd0
00:10:01.999 12:08:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:10:01.999 12:08:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:10:01.999 12:08:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:10:02.000 12:08:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:10:02.000 12:08:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:02.000 12:08:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:02.000 12:08:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:10:02.000 12:08:57 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:10:02.000 12:08:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:02.000 12:08:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:02.000 12:08:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:10:02.000 1+0 records in
00:10:02.000 1+0 records out
00:10:02.000 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000491944 s, 8.3 MB/s
00:10:02.000 12:08:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:10:02.000 12:08:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:10:02.000 12:08:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:10:02.000 12:08:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:02.000 12:08:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:10:02.000 12:08:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:02.000 12:08:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:10:02.000 12:08:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:10:02.258 /dev/nbd1
00:10:02.258 12:08:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:10:02.258 12:08:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:10:02.258 12:08:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:10:02.258 12:08:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:10:02.258 12:08:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:02.258 12:08:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:02.258 12:08:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:10:02.258 12:08:58 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:10:02.258 12:08:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:02.258 12:08:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:02.258 12:08:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:10:02.258 1+0 records in
00:10:02.258 1+0 records out
00:10:02.258 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291805 s, 14.0 MB/s
00:10:02.258 12:08:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:10:02.258 12:08:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:10:02.258 12:08:58 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:10:02.258 12:08:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:02.258 12:08:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:10:02.258 12:08:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:02.258 12:08:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:10:02.258 12:08:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:10:02.258 12:08:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:02.258 12:08:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:10:02.517 12:08:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:10:02.517 {
00:10:02.517 "nbd_device": "/dev/nbd0",
00:10:02.517 "bdev_name": "Malloc0"
00:10:02.517 },
00:10:02.517 {
00:10:02.517 "nbd_device": "/dev/nbd1",
00:10:02.517 "bdev_name": "Malloc1"
00:10:02.517 }
00:10:02.517 ]'
00:10:02.517 12:08:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:10:02.517 {
00:10:02.517 "nbd_device": "/dev/nbd0",
00:10:02.517 "bdev_name": "Malloc0"
00:10:02.517 },
00:10:02.517 {
00:10:02.517 "nbd_device": "/dev/nbd1",
00:10:02.517 "bdev_name": "Malloc1"
00:10:02.517 }
00:10:02.517 ]'
00:10:02.517 12:08:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:10:02.517 12:08:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:10:02.517 /dev/nbd1'
00:10:02.517 12:08:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:10:02.517 /dev/nbd1'
00:10:02.517 12:08:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:10:02.517 12:08:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:10:02.517 12:08:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:10:02.517 12:08:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:10:02.517 12:08:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:10:02.517 12:08:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:10:02.517 12:08:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:02.517 12:08:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:10:02.517 12:08:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:10:02.517 12:08:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:10:02.517 12:08:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:10:02.517 12:08:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:10:02.517 256+0 records in
00:10:02.517 256+0 records out
00:10:02.517 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00716941 s, 146 MB/s
00:10:02.517 12:08:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:02.517 12:08:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:10:02.775 256+0 records in
00:10:02.775 256+0 records out
00:10:02.775 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0332204 s, 31.6 MB/s
00:10:02.775 12:08:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:02.775 12:08:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:10:02.775 256+0 records in
00:10:02.775 256+0 records out
00:10:02.775 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0333897 s, 31.4 MB/s
00:10:02.775 12:08:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:10:02.775 12:08:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:02.775 12:08:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:10:02.775 12:08:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:10:02.775 12:08:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:10:02.775 12:08:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:10:02.775 12:08:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:10:02.775 12:08:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:02.775 12:08:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:10:02.775 12:08:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:02.775 12:08:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:10:02.775 12:08:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:10:02.775 12:08:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:10:02.775 12:08:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:02.775 12:08:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:02.775 12:08:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:10:02.775 12:08:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:10:02.775 12:08:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:02.775 12:08:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:10:03.032 12:08:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:10:03.032 12:08:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:10:03.032 12:08:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:10:03.032 12:08:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:03.032 12:08:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:03.032 12:08:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:10:03.032 12:08:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:10:03.032 12:08:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:10:03.032 12:08:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:03.032 12:08:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:10:03.290 12:08:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:10:03.290 12:08:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:10:03.290 12:08:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:10:03.290 12:08:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:03.290 12:08:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:03.290 12:08:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:10:03.290 12:08:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:10:03.290 12:08:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:10:03.290 12:08:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:10:03.290 12:08:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:03.290 12:08:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:10:03.549 12:08:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:10:03.549 12:08:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:10:03.549 12:08:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:10:03.807 12:08:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:10:03.807 12:08:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:10:03.807 12:08:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:10:03.807 12:08:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:10:03.807 12:08:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:10:03.807 12:08:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:10:03.807 12:08:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:10:03.807 12:08:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:10:03.807 12:08:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:10:03.807 12:08:59 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:10:04.375 12:09:00 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:10:05.312 [2024-11-25 12:09:01.237171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:10:05.312 [2024-11-25 12:09:01.367137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:05.312 [2024-11-25 12:09:01.367154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:05.570 [2024-11-25 12:09:01.556868] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:10:05.570 [2024-11-25 12:09:01.556985] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:10:07.475 spdk_app_start Round 2
00:10:07.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:10:07.475 12:09:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:10:07.475 12:09:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:10:07.475 12:09:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58312 /var/tmp/spdk-nbd.sock
00:10:07.475 12:09:03 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58312 ']'
00:10:07.475 12:09:03 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:10:07.475 12:09:03 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:07.475 12:09:03 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:10:07.475 12:09:03 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:07.475 12:09:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:10:07.475 12:09:03 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:07.475 12:09:03 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:10:07.475 12:09:03 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:10:08.043 Malloc0
00:10:08.043 12:09:03 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:10:08.301 Malloc1
00:10:08.301 12:09:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:10:08.301 12:09:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:08.301 12:09:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:10:08.301 12:09:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:10:08.301 12:09:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:08.301 12:09:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:10:08.301 12:09:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:10:08.301 12:09:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:08.301 12:09:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:10:08.301 12:09:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:10:08.301 12:09:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:08.301 12:09:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:10:08.301 12:09:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:10:08.301 12:09:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:10:08.301 12:09:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:10:08.301 12:09:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:10:08.559 /dev/nbd0
00:10:08.559 12:09:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:10:08.559 12:09:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:10:08.559 12:09:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:10:08.559 12:09:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:10:08.559 12:09:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:08.559 12:09:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:08.559 12:09:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:10:08.559 12:09:04 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:10:08.559 12:09:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:08.559 12:09:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:08.559 12:09:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:10:08.559 1+0 records in
00:10:08.559 1+0 records out
00:10:08.559 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000715405 s, 5.7 MB/s
00:10:08.559 12:09:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:10:08.559 12:09:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:10:08.559 12:09:04 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:10:08.559 12:09:04 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:08.559 12:09:04 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:10:08.559 12:09:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:08.560 12:09:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:10:08.560 12:09:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:10:08.817 /dev/nbd1
00:10:08.817 12:09:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:10:09.076 12:09:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:10:09.076 12:09:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:10:09.076 12:09:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:10:09.076 12:09:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:09.076 12:09:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:09.076 12:09:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:10:09.076 12:09:04 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:10:09.076 12:09:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:09.076 12:09:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:09.076 12:09:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:10:09.076 1+0 records in
00:10:09.076 1+0 records out
00:10:09.076 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378499 s, 10.8 MB/s
00:10:09.076 12:09:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:10:09.076 12:09:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:10:09.076 12:09:04 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:10:09.076 12:09:04 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:09.076 12:09:04 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:10:09.076 12:09:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:09.076 12:09:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:10:09.076 12:09:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:10:09.077 12:09:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:09.077 12:09:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:10:09.337 12:09:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:10:09.337 {
00:10:09.337 "nbd_device": "/dev/nbd0",
00:10:09.337 "bdev_name": "Malloc0"
00:10:09.337 },
00:10:09.337 {
00:10:09.337 "nbd_device": "/dev/nbd1",
00:10:09.337 "bdev_name": "Malloc1"
00:10:09.337 }
00:10:09.337 ]'
00:10:09.337 12:09:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:10:09.337 {
00:10:09.337 "nbd_device": "/dev/nbd0", 00:10:09.337 "bdev_name": "Malloc0" 00:10:09.337 }, 00:10:09.337 { 00:10:09.337 "nbd_device": "/dev/nbd1", 00:10:09.337 "bdev_name": "Malloc1" 00:10:09.337 } 00:10:09.337 ]' 00:10:09.337 12:09:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:09.337 12:09:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:09.337 /dev/nbd1' 00:10:09.337 12:09:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:09.337 /dev/nbd1' 00:10:09.337 12:09:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:09.337 12:09:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:09.337 12:09:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:09.337 12:09:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:09.337 12:09:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:09.337 12:09:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:09.337 12:09:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:09.337 12:09:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:09.337 12:09:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:09.337 12:09:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:09.337 12:09:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:09.337 12:09:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:09.337 256+0 records in 00:10:09.337 256+0 records out 00:10:09.337 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00714254 s, 147 MB/s 00:10:09.337 12:09:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:09.337 12:09:05 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:09.337 256+0 records in 00:10:09.337 256+0 records out 00:10:09.337 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0346797 s, 30.2 MB/s 00:10:09.337 12:09:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:09.337 12:09:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:09.337 256+0 records in 00:10:09.337 256+0 records out 00:10:09.337 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0391317 s, 26.8 MB/s 00:10:09.337 12:09:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:09.337 12:09:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:09.337 12:09:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:09.337 12:09:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:09.337 12:09:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:09.337 12:09:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:09.337 12:09:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:09.337 12:09:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:09.337 12:09:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:09.337 12:09:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:09.337 12:09:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:09.337 12:09:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:10:09.337 12:09:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:09.337 12:09:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:09.337 12:09:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:09.337 12:09:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:09.337 12:09:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:09.337 12:09:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:09.338 12:09:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:09.904 12:09:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:09.904 12:09:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:09.904 12:09:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:09.904 12:09:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:09.904 12:09:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:09.904 12:09:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:09.904 12:09:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:09.904 12:09:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:09.904 12:09:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:09.904 12:09:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:10.164 12:09:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:10.164 12:09:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:10.164 12:09:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:10.164 12:09:06 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:10.164 12:09:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:10.164 12:09:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:10.164 12:09:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:10.164 12:09:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:10.164 12:09:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:10.164 12:09:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:10.164 12:09:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:10.423 12:09:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:10.423 12:09:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:10.423 12:09:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:10.423 12:09:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:10.423 12:09:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:10.423 12:09:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:10.423 12:09:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:10.423 12:09:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:10.423 12:09:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:10.423 12:09:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:10.423 12:09:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:10.423 12:09:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:10.423 12:09:06 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:11.047 12:09:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:12.427 
[2024-11-25 12:09:08.114246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:12.427 [2024-11-25 12:09:08.249102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.427 [2024-11-25 12:09:08.249105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.427 [2024-11-25 12:09:08.440961] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:12.427 [2024-11-25 12:09:08.441094] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:14.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:14.331 12:09:10 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58312 /var/tmp/spdk-nbd.sock 00:10:14.331 12:09:10 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58312 ']' 00:10:14.331 12:09:10 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:14.331 12:09:10 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:14.331 12:09:10 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:10:14.331 12:09:10 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:14.331 12:09:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:14.331 12:09:10 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:14.331 12:09:10 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:14.331 12:09:10 event.app_repeat -- event/event.sh@39 -- # killprocess 58312 00:10:14.331 12:09:10 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58312 ']' 00:10:14.331 12:09:10 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58312 00:10:14.331 12:09:10 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:10:14.331 12:09:10 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:14.331 12:09:10 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58312 00:10:14.331 killing process with pid 58312 00:10:14.331 12:09:10 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:14.331 12:09:10 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:14.331 12:09:10 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58312' 00:10:14.331 12:09:10 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58312 00:10:14.331 12:09:10 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58312 00:10:15.708 spdk_app_start is called in Round 0. 00:10:15.708 Shutdown signal received, stop current app iteration 00:10:15.708 Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 reinitialization... 00:10:15.708 spdk_app_start is called in Round 1. 00:10:15.708 Shutdown signal received, stop current app iteration 00:10:15.708 Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 reinitialization... 00:10:15.708 spdk_app_start is called in Round 2. 
00:10:15.708 Shutdown signal received, stop current app iteration 00:10:15.708 Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 reinitialization... 00:10:15.708 spdk_app_start is called in Round 3. 00:10:15.708 Shutdown signal received, stop current app iteration 00:10:15.708 12:09:11 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:10:15.708 12:09:11 event.app_repeat -- event/event.sh@42 -- # return 0 00:10:15.708 00:10:15.708 real 0m22.181s 00:10:15.708 user 0m49.299s 00:10:15.708 sys 0m3.102s 00:10:15.708 12:09:11 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:15.708 ************************************ 00:10:15.708 END TEST app_repeat 00:10:15.708 12:09:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:15.708 ************************************ 00:10:15.708 12:09:11 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:10:15.708 12:09:11 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:15.708 12:09:11 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:15.708 12:09:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.708 12:09:11 event -- common/autotest_common.sh@10 -- # set +x 00:10:15.708 ************************************ 00:10:15.708 START TEST cpu_locks 00:10:15.708 ************************************ 00:10:15.708 12:09:11 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:15.708 * Looking for test storage... 
00:10:15.708 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:15.708 12:09:11 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:15.708 12:09:11 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:10:15.708 12:09:11 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:15.708 12:09:11 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:15.708 12:09:11 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:15.708 12:09:11 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:15.708 12:09:11 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:15.708 12:09:11 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:10:15.708 12:09:11 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:10:15.708 12:09:11 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:10:15.708 12:09:11 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:10:15.708 12:09:11 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:10:15.708 12:09:11 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:10:15.708 12:09:11 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:10:15.708 12:09:11 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:15.708 12:09:11 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:10:15.708 12:09:11 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:10:15.708 12:09:11 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:15.708 12:09:11 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:15.708 12:09:11 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:10:15.708 12:09:11 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:10:15.708 12:09:11 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:15.708 12:09:11 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:10:15.708 12:09:11 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:10:15.708 12:09:11 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:10:15.708 12:09:11 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:10:15.708 12:09:11 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:15.708 12:09:11 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:10:15.708 12:09:11 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:10:15.708 12:09:11 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:15.708 12:09:11 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:15.708 12:09:11 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:10:15.708 12:09:11 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:15.708 12:09:11 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:15.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.708 --rc genhtml_branch_coverage=1 00:10:15.708 --rc genhtml_function_coverage=1 00:10:15.708 --rc genhtml_legend=1 00:10:15.708 --rc geninfo_all_blocks=1 00:10:15.708 --rc geninfo_unexecuted_blocks=1 00:10:15.708 00:10:15.708 ' 00:10:15.708 12:09:11 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:15.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.708 --rc genhtml_branch_coverage=1 00:10:15.708 --rc genhtml_function_coverage=1 00:10:15.708 --rc genhtml_legend=1 00:10:15.708 --rc geninfo_all_blocks=1 00:10:15.708 --rc geninfo_unexecuted_blocks=1 
00:10:15.708 00:10:15.708 ' 00:10:15.708 12:09:11 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:15.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.708 --rc genhtml_branch_coverage=1 00:10:15.708 --rc genhtml_function_coverage=1 00:10:15.708 --rc genhtml_legend=1 00:10:15.708 --rc geninfo_all_blocks=1 00:10:15.708 --rc geninfo_unexecuted_blocks=1 00:10:15.708 00:10:15.708 ' 00:10:15.708 12:09:11 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:15.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.708 --rc genhtml_branch_coverage=1 00:10:15.708 --rc genhtml_function_coverage=1 00:10:15.708 --rc genhtml_legend=1 00:10:15.708 --rc geninfo_all_blocks=1 00:10:15.708 --rc geninfo_unexecuted_blocks=1 00:10:15.708 00:10:15.708 ' 00:10:15.708 12:09:11 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:10:15.708 12:09:11 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:10:15.708 12:09:11 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:10:15.708 12:09:11 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:10:15.708 12:09:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:15.708 12:09:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.708 12:09:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:15.708 ************************************ 00:10:15.708 START TEST default_locks 00:10:15.708 ************************************ 00:10:15.708 12:09:11 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:10:15.708 12:09:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58793 00:10:15.708 12:09:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58793 00:10:15.708 12:09:11 event.cpu_locks.default_locks -- 
common/autotest_common.sh@835 -- # '[' -z 58793 ']' 00:10:15.708 12:09:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:15.708 12:09:11 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.708 12:09:11 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:15.708 12:09:11 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.708 12:09:11 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:15.708 12:09:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:15.967 [2024-11-25 12:09:11.819981] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:10:15.967 [2024-11-25 12:09:11.820159] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58793 ] 00:10:15.967 [2024-11-25 12:09:12.012214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.226 [2024-11-25 12:09:12.188568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.165 12:09:13 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:17.165 12:09:13 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:10:17.165 12:09:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58793 00:10:17.165 12:09:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58793 00:10:17.165 12:09:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:17.736 12:09:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58793 00:10:17.736 12:09:13 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58793 ']' 00:10:17.736 12:09:13 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58793 00:10:17.736 12:09:13 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:10:17.736 12:09:13 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.736 12:09:13 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58793 00:10:17.736 12:09:13 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:17.736 12:09:13 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:17.736 killing process with pid 58793 00:10:17.736 12:09:13 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58793' 00:10:17.736 12:09:13 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58793 00:10:17.736 12:09:13 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58793 00:10:20.297 12:09:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58793 00:10:20.297 12:09:15 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:10:20.297 12:09:15 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58793 00:10:20.297 12:09:15 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:10:20.297 12:09:15 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:20.297 12:09:15 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:10:20.297 12:09:15 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:20.297 12:09:15 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58793 00:10:20.297 12:09:15 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58793 ']' 00:10:20.297 12:09:15 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.297 12:09:15 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:20.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.297 12:09:15 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:20.297 12:09:15 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:20.297 12:09:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:20.297 ERROR: process (pid: 58793) is no longer running 00:10:20.297 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58793) - No such process 00:10:20.297 12:09:15 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:20.297 12:09:15 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:10:20.297 12:09:15 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:10:20.297 12:09:15 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:20.297 12:09:15 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:20.297 12:09:15 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:20.297 12:09:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:10:20.297 12:09:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:20.297 12:09:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:10:20.297 12:09:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:20.297 00:10:20.297 real 0m4.248s 00:10:20.297 user 0m4.388s 00:10:20.297 sys 0m0.776s 00:10:20.297 12:09:15 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:20.297 12:09:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:20.297 ************************************ 00:10:20.297 END TEST default_locks 00:10:20.297 ************************************ 00:10:20.297 12:09:15 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:10:20.297 12:09:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:10:20.297 12:09:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:20.297 12:09:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:20.297 ************************************ 00:10:20.297 START TEST default_locks_via_rpc 00:10:20.297 ************************************ 00:10:20.297 12:09:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:10:20.297 12:09:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58869 00:10:20.297 12:09:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58869 00:10:20.297 12:09:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58869 ']' 00:10:20.297 12:09:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.297 12:09:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:20.297 12:09:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:20.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.297 12:09:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.297 12:09:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:20.297 12:09:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.297 [2024-11-25 12:09:16.094119] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:10:20.297 [2024-11-25 12:09:16.094296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58869 ] 00:10:20.297 [2024-11-25 12:09:16.274746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.556 [2024-11-25 12:09:16.410506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.493 12:09:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:21.493 12:09:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:21.493 12:09:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:10:21.493 12:09:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.493 12:09:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:21.493 12:09:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.493 12:09:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:10:21.493 12:09:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:21.493 12:09:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:10:21.493 12:09:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:21.493 12:09:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:10:21.493 12:09:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.493 12:09:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:21.493 12:09:17 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.493 12:09:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58869 00:10:21.494 12:09:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:21.494 12:09:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58869 00:10:22.059 12:09:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58869 00:10:22.059 12:09:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58869 ']' 00:10:22.059 12:09:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58869 00:10:22.059 12:09:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:10:22.059 12:09:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:22.059 12:09:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58869 00:10:22.059 12:09:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:22.059 12:09:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:22.059 killing process with pid 58869 00:10:22.059 12:09:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58869' 00:10:22.059 12:09:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58869 00:10:22.059 12:09:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58869 00:10:24.590 00:10:24.590 real 0m4.360s 00:10:24.590 user 0m4.445s 00:10:24.590 sys 0m0.804s 00:10:24.590 12:09:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.590 12:09:20 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.590 ************************************ 00:10:24.590 END TEST default_locks_via_rpc 00:10:24.590 ************************************ 00:10:24.590 12:09:20 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:10:24.590 12:09:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:24.590 12:09:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:24.590 12:09:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:24.590 ************************************ 00:10:24.590 START TEST non_locking_app_on_locked_coremask 00:10:24.590 ************************************ 00:10:24.590 12:09:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:10:24.590 12:09:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58951 00:10:24.590 12:09:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:24.590 12:09:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58951 /var/tmp/spdk.sock 00:10:24.590 12:09:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58951 ']' 00:10:24.590 12:09:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:24.590 12:09:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:24.590 12:09:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.590 12:09:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:24.590 12:09:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:24.590 [2024-11-25 12:09:20.541051] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:10:24.590 [2024-11-25 12:09:20.541231] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58951 ] 00:10:24.849 [2024-11-25 12:09:20.731946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.849 [2024-11-25 12:09:20.868194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.786 12:09:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:25.786 12:09:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:25.786 12:09:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:10:25.786 12:09:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58967 00:10:25.786 12:09:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58967 /var/tmp/spdk2.sock 00:10:25.786 12:09:21 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58967 ']' 00:10:25.786 12:09:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:25.786 12:09:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:25.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:25.786 12:09:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:25.786 12:09:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:25.786 12:09:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:25.786 [2024-11-25 12:09:21.862686] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:10:25.786 [2024-11-25 12:09:21.862860] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58967 ] 00:10:26.045 [2024-11-25 12:09:22.057942] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:26.045 [2024-11-25 12:09:22.058038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.309 [2024-11-25 12:09:22.350142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.892 12:09:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:28.892 12:09:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:28.892 12:09:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58951 00:10:28.893 12:09:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:28.893 12:09:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58951 00:10:29.460 12:09:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58951 00:10:29.460 12:09:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58951 ']' 00:10:29.460 12:09:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58951 00:10:29.460 12:09:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:29.460 12:09:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:29.460 12:09:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58951 00:10:29.460 12:09:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:29.460 12:09:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:29.460 killing process with pid 58951 00:10:29.460 12:09:25 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58951' 00:10:29.460 12:09:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58951 00:10:29.460 12:09:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58951 00:10:34.731 12:09:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58967 00:10:34.731 12:09:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58967 ']' 00:10:34.731 12:09:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58967 00:10:34.731 12:09:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:34.731 12:09:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:34.731 12:09:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58967 00:10:34.731 12:09:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:34.731 12:09:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:34.731 killing process with pid 58967 00:10:34.731 12:09:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58967' 00:10:34.731 12:09:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58967 00:10:34.731 12:09:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58967 00:10:36.717 00:10:36.717 real 0m11.990s 00:10:36.717 user 0m12.624s 00:10:36.717 sys 0m1.540s 00:10:36.717 12:09:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:10:36.717 12:09:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:36.717 ************************************ 00:10:36.717 END TEST non_locking_app_on_locked_coremask 00:10:36.717 ************************************ 00:10:36.717 12:09:32 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:10:36.717 12:09:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:36.717 12:09:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:36.717 12:09:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:36.717 ************************************ 00:10:36.717 START TEST locking_app_on_unlocked_coremask 00:10:36.717 ************************************ 00:10:36.717 12:09:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:10:36.717 12:09:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59123 00:10:36.717 12:09:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:10:36.717 12:09:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59123 /var/tmp/spdk.sock 00:10:36.717 12:09:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59123 ']' 00:10:36.717 12:09:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.717 12:09:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:36.717 12:09:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.717 12:09:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:36.717 12:09:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:36.717 [2024-11-25 12:09:32.557653] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:10:36.717 [2024-11-25 12:09:32.557815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59123 ] 00:10:36.717 [2024-11-25 12:09:32.730932] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:10:36.717 [2024-11-25 12:09:32.731044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.975 [2024-11-25 12:09:32.917717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.348 12:09:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:38.348 12:09:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:38.348 12:09:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59145 00:10:38.348 12:09:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:38.348 12:09:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59145 /var/tmp/spdk2.sock 00:10:38.348 12:09:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59145 ']' 00:10:38.348 12:09:34 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:38.348 12:09:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:38.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:38.348 12:09:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:38.348 12:09:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:38.348 12:09:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:38.348 [2024-11-25 12:09:34.199465] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:10:38.348 [2024-11-25 12:09:34.199689] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59145 ] 00:10:38.348 [2024-11-25 12:09:34.418094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.914 [2024-11-25 12:09:34.781073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.445 12:09:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:41.445 12:09:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:41.445 12:09:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59145 00:10:41.445 12:09:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59145 00:10:41.445 12:09:37 event.cpu_locks.locking_app_on_unlocked_coremask -- 
event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:42.012 12:09:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59123 00:10:42.012 12:09:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59123 ']' 00:10:42.012 12:09:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59123 00:10:42.012 12:09:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:42.012 12:09:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:42.012 12:09:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59123 00:10:42.012 12:09:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:42.012 12:09:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:42.012 killing process with pid 59123 00:10:42.012 12:09:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59123' 00:10:42.012 12:09:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59123 00:10:42.012 12:09:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59123 00:10:47.275 12:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59145 00:10:47.275 12:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59145 ']' 00:10:47.275 12:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59145 00:10:47.275 12:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:47.275 
12:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:47.275 12:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59145 00:10:47.275 12:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:47.275 killing process with pid 59145 00:10:47.275 12:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:47.275 12:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59145' 00:10:47.275 12:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59145 00:10:47.275 12:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59145 00:10:48.655 00:10:48.655 real 0m12.177s 00:10:48.655 user 0m12.881s 00:10:48.655 sys 0m1.562s 00:10:48.655 12:09:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:48.655 12:09:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:48.655 ************************************ 00:10:48.655 END TEST locking_app_on_unlocked_coremask 00:10:48.655 ************************************ 00:10:48.655 12:09:44 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:10:48.655 12:09:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:48.655 12:09:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:48.655 12:09:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:48.655 ************************************ 00:10:48.655 START TEST locking_app_on_locked_coremask 00:10:48.655 
************************************ 00:10:48.655 12:09:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:10:48.655 12:09:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59293 00:10:48.655 12:09:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59293 /var/tmp/spdk.sock 00:10:48.655 12:09:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59293 ']' 00:10:48.655 12:09:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.655 12:09:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:48.655 12:09:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.655 12:09:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:48.655 12:09:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:48.655 12:09:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:48.913 [2024-11-25 12:09:44.801757] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:10:48.914 [2024-11-25 12:09:44.802012] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59293 ] 00:10:48.914 [2024-11-25 12:09:44.991185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.172 [2024-11-25 12:09:45.130851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.110 12:09:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:50.110 12:09:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:50.110 12:09:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59315 00:10:50.110 12:09:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:50.110 12:09:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59315 /var/tmp/spdk2.sock 00:10:50.110 12:09:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:10:50.110 12:09:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59315 /var/tmp/spdk2.sock 00:10:50.110 12:09:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:10:50.110 12:09:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:50.110 12:09:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:10:50.110 12:09:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:10:50.110 12:09:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59315 /var/tmp/spdk2.sock 00:10:50.110 12:09:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59315 ']' 00:10:50.110 12:09:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:50.110 12:09:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:50.110 12:09:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:50.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:50.110 12:09:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:50.110 12:09:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:50.110 [2024-11-25 12:09:46.128550] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:10:50.110 [2024-11-25 12:09:46.128775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59315 ] 00:10:50.369 [2024-11-25 12:09:46.332573] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59293 has claimed it. 00:10:50.369 [2024-11-25 12:09:46.332651] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:10:50.937 ERROR: process (pid: 59315) is no longer running 00:10:50.937 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59315) - No such process 00:10:50.937 12:09:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:50.937 12:09:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:10:50.937 12:09:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:10:50.937 12:09:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:50.937 12:09:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:50.937 12:09:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:50.937 12:09:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59293 00:10:50.937 12:09:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59293 00:10:50.937 12:09:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:51.197 12:09:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59293 00:10:51.197 12:09:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59293 ']' 00:10:51.197 12:09:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59293 00:10:51.197 12:09:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:51.197 12:09:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:51.197 12:09:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59293 00:10:51.455 
12:09:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:51.455 12:09:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:51.455 killing process with pid 59293 00:10:51.455 12:09:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59293' 00:10:51.455 12:09:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59293 00:10:51.455 12:09:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59293 00:10:54.008 00:10:54.008 real 0m4.840s 00:10:54.008 user 0m5.303s 00:10:54.008 sys 0m0.908s 00:10:54.008 12:09:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:54.008 12:09:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:54.008 ************************************ 00:10:54.008 END TEST locking_app_on_locked_coremask 00:10:54.008 ************************************ 00:10:54.008 12:09:49 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:10:54.008 12:09:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:54.008 12:09:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:54.008 12:09:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:54.008 ************************************ 00:10:54.008 START TEST locking_overlapped_coremask 00:10:54.008 ************************************ 00:10:54.008 12:09:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:10:54.008 12:09:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59384 00:10:54.008 12:09:49 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:10:54.008 12:09:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59384 /var/tmp/spdk.sock 00:10:54.008 12:09:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59384 ']' 00:10:54.008 12:09:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.008 12:09:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:54.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:54.008 12:09:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.008 12:09:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:54.008 12:09:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:54.008 [2024-11-25 12:09:49.668737] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:10:54.008 [2024-11-25 12:09:49.668912] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59384 ] 00:10:54.008 [2024-11-25 12:09:49.842296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:54.008 [2024-11-25 12:09:49.978614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:54.008 [2024-11-25 12:09:49.978687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.008 [2024-11-25 12:09:49.978705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:54.945 12:09:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:54.945 12:09:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:54.945 12:09:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59408 00:10:54.945 12:09:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59408 /var/tmp/spdk2.sock 00:10:54.945 12:09:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:10:54.945 12:09:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59408 /var/tmp/spdk2.sock 00:10:54.945 12:09:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:10:54.945 12:09:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:10:54.945 12:09:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:54.945 12:09:50 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:10:54.945 12:09:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:54.945 12:09:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59408 /var/tmp/spdk2.sock 00:10:54.945 12:09:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59408 ']' 00:10:54.945 12:09:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:54.945 12:09:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:54.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:54.945 12:09:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:54.945 12:09:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:54.945 12:09:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:54.945 [2024-11-25 12:09:50.976744] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:10:54.945 [2024-11-25 12:09:50.976928] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59408 ] 00:10:55.204 [2024-11-25 12:09:51.187501] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59384 has claimed it. 00:10:55.204 [2024-11-25 12:09:51.187824] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
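The lock failure above is a plain bitmask collision: the first target was started with `-m 0x7` (cores 0-2) and the second with `-m 0x1c` (cores 2-4), so both claim core 2 — exactly the core named in the error. A minimal sketch of the overlap (mask values copied from the trace; the overlap computation is illustrative, not SPDK's internal code):

```shell
#!/usr/bin/env bash
# Core masks passed to the two spdk_tgt instances in the trace above.
mask_a=0x7    # -m 0x7  -> cores 0,1,2
mask_b=0x1c   # -m 0x1c -> cores 2,3,4

# A nonzero AND means at least one core is claimed by both processes.
overlap=$(( mask_a & mask_b ))
printf 'overlap mask: 0x%x\n' "$overlap"   # bit 2 set -> core 2, matching the error
```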
00:10:55.772 ERROR: process (pid: 59408) is no longer running 00:10:55.772 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59408) - No such process 00:10:55.772 12:09:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:55.772 12:09:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:10:55.772 12:09:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:10:55.772 12:09:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:55.772 12:09:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:55.772 12:09:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:55.772 12:09:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:10:55.772 12:09:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:55.772 12:09:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:55.772 12:09:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:55.772 12:09:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59384 00:10:55.772 12:09:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59384 ']' 00:10:55.772 12:09:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59384 00:10:55.772 12:09:51 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:10:55.772 12:09:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:55.772 12:09:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59384 00:10:55.772 12:09:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:55.772 12:09:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:55.772 12:09:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59384' 00:10:55.772 killing process with pid 59384 00:10:55.772 12:09:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59384 00:10:55.772 12:09:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59384 00:10:58.345 00:10:58.345 real 0m4.394s 00:10:58.345 user 0m11.991s 00:10:58.345 sys 0m0.689s 00:10:58.345 12:09:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:58.345 12:09:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:58.345 ************************************ 00:10:58.345 END TEST locking_overlapped_coremask 00:10:58.345 ************************************ 00:10:58.345 12:09:53 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:10:58.345 12:09:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:58.345 12:09:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:58.345 12:09:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:58.345 ************************************ 00:10:58.345 START TEST 
locking_overlapped_coremask_via_rpc 00:10:58.345 ************************************ 00:10:58.345 12:09:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:10:58.345 12:09:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59472 00:10:58.345 12:09:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59472 /var/tmp/spdk.sock 00:10:58.345 12:09:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:10:58.345 12:09:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59472 ']' 00:10:58.345 12:09:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.345 12:09:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:58.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.345 12:09:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.345 12:09:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:58.345 12:09:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.345 [2024-11-25 12:09:54.119895] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:10:58.345 [2024-11-25 12:09:54.120569] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59472 ] 00:10:58.345 [2024-11-25 12:09:54.296796] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:10:58.345 [2024-11-25 12:09:54.296861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:58.603 [2024-11-25 12:09:54.442530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:58.603 [2024-11-25 12:09:54.443888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:58.603 [2024-11-25 12:09:54.444016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.539 12:09:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:59.539 12:09:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:59.539 12:09:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59490 00:10:59.539 12:09:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:10:59.539 12:09:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59490 /var/tmp/spdk2.sock 00:10:59.539 12:09:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59490 ']' 00:10:59.539 12:09:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:59.539 12:09:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:59.539 12:09:55 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:59.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:59.539 12:09:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:59.539 12:09:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.539 [2024-11-25 12:09:55.513805] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:10:59.539 [2024-11-25 12:09:55.514678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59490 ] 00:10:59.798 [2024-11-25 12:09:55.718402] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
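When core locks are active, each claimed core leaves a `/var/tmp/spdk_cpu_lock_NNN` file, and the `check_remaining_locks` helper traced earlier verifies them by comparing a glob against a brace expansion. A self-contained sketch of that same comparison, using a scratch directory instead of `/var/tmp` so it does not depend on a running target:

```shell
#!/usr/bin/env bash
# Recreate the lock files a 3-core run (mask 0x7) would leave behind.
tmpdir=$(mktemp -d)
touch "$tmpdir"/spdk_cpu_lock_{000..002}

# Same pattern as check_remaining_locks: glob what exists, brace-expand
# what is expected, and compare the two word lists.
locks=( "$tmpdir"/spdk_cpu_lock_* )
locks_expected=( "$tmpdir"/spdk_cpu_lock_{000..002} )
[[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "locks match"

rm -rf "$tmpdir"
```

The glob expands in lexicographic order, which is why the zero-padded `000..002` names compare equal to the brace expansion word-for-word.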
00:10:59.798 [2024-11-25 12:09:55.718477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:00.056 [2024-11-25 12:09:55.992154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:00.056 [2024-11-25 12:09:55.992256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:00.056 [2024-11-25 12:09:55.992266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:02.592 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:02.592 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:02.592 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:11:02.592 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.592 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.592 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.592 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:02.592 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:02.592 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:02.592 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:02.592 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:02.592 12:09:58 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:02.592 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:02.592 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:02.592 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.592 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.592 [2024-11-25 12:09:58.319537] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59472 has claimed it. 00:11:02.592 request: 00:11:02.592 { 00:11:02.592 "method": "framework_enable_cpumask_locks", 00:11:02.592 "req_id": 1 00:11:02.592 } 00:11:02.592 Got JSON-RPC error response 00:11:02.592 response: 00:11:02.592 { 00:11:02.592 "code": -32603, 00:11:02.592 "message": "Failed to claim CPU core: 2" 00:11:02.592 } 00:11:02.592 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:02.592 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:02.592 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:02.592 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:02.592 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:02.592 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59472 /var/tmp/spdk.sock 00:11:02.592 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59472 ']' 00:11:02.592 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.592 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:02.592 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.592 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:02.592 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.592 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:02.592 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:02.592 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59490 /var/tmp/spdk2.sock 00:11:02.592 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59490 ']' 00:11:02.592 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:02.592 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:02.592 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:02.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
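The `framework_enable_cpumask_locks` call above failed with a JSON-RPC error object (`"code": -32603`). A hedged sketch of pulling that code out of such a response with plain `sed` — the response text is copied from the trace, and with a live target it would arrive over `/var/tmp/spdk2.sock` rather than a shell variable:

```shell
#!/usr/bin/env bash
# Error body as printed in the JSON-RPC response above.
resp='{ "code": -32603, "message": "Failed to claim CPU core: 2" }'

# Extract the numeric "code" field without a JSON parser
# (sufficient for this flat, single-line response).
code=$(sed -n 's/.*"code": *\(-\{0,1\}[0-9][0-9]*\).*/\1/p' <<< "$resp")
echo "$code"
```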
00:11:02.592 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:02.592 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.851 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:02.851 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:02.851 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:11:02.851 ************************************ 00:11:02.851 END TEST locking_overlapped_coremask_via_rpc 00:11:02.851 ************************************ 00:11:02.851 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:02.851 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:02.851 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:02.851 00:11:02.851 real 0m4.906s 00:11:02.851 user 0m1.886s 00:11:02.852 sys 0m0.224s 00:11:02.852 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.852 12:09:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:03.110 12:09:58 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:11:03.110 12:09:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59472 ]] 00:11:03.110 12:09:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59472 00:11:03.110 12:09:58 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59472 ']' 00:11:03.110 12:09:58 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59472 00:11:03.110 12:09:58 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:11:03.110 12:09:58 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:03.110 12:09:58 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59472 00:11:03.110 killing process with pid 59472 00:11:03.110 12:09:58 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:03.110 12:09:58 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:03.110 12:09:58 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59472' 00:11:03.110 12:09:58 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59472 00:11:03.110 12:09:58 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59472 00:11:05.640 12:10:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59490 ]] 00:11:05.640 12:10:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59490 00:11:05.640 12:10:01 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59490 ']' 00:11:05.640 12:10:01 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59490 00:11:05.640 12:10:01 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:11:05.640 12:10:01 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:05.640 12:10:01 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59490 00:11:05.640 killing process with pid 59490 00:11:05.640 12:10:01 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:11:05.640 12:10:01 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:11:05.640 12:10:01 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59490' 00:11:05.640 12:10:01 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59490 00:11:05.640 12:10:01 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59490 00:11:07.540 12:10:03 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:07.540 Process with pid 59472 is not found 00:11:07.540 12:10:03 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:11:07.540 12:10:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59472 ]] 00:11:07.540 12:10:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59472 00:11:07.540 12:10:03 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59472 ']' 00:11:07.540 12:10:03 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59472 00:11:07.540 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59472) - No such process 00:11:07.540 12:10:03 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59472 is not found' 00:11:07.540 12:10:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59490 ]] 00:11:07.540 12:10:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59490 00:11:07.540 12:10:03 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59490 ']' 00:11:07.540 12:10:03 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59490 00:11:07.540 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59490) - No such process 00:11:07.540 Process with pid 59490 is not found 00:11:07.540 12:10:03 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59490 is not found' 00:11:07.540 12:10:03 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:07.540 00:11:07.540 real 0m52.029s 00:11:07.540 user 1m29.949s 00:11:07.540 sys 0m7.744s 00:11:07.540 12:10:03 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:07.540 ************************************ 00:11:07.540 END TEST cpu_locks 00:11:07.540 
************************************ 00:11:07.540 12:10:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:07.540 00:11:07.541 real 1m25.341s 00:11:07.541 user 2m36.751s 00:11:07.541 sys 0m11.938s 00:11:07.541 12:10:03 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:07.541 12:10:03 event -- common/autotest_common.sh@10 -- # set +x 00:11:07.541 ************************************ 00:11:07.541 END TEST event 00:11:07.541 ************************************ 00:11:07.541 12:10:03 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:11:07.541 12:10:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:07.541 12:10:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:07.541 12:10:03 -- common/autotest_common.sh@10 -- # set +x 00:11:07.541 ************************************ 00:11:07.541 START TEST thread 00:11:07.541 ************************************ 00:11:07.541 12:10:03 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:11:07.800 * Looking for test storage... 
00:11:07.800 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:11:07.800 12:10:03 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:07.800 12:10:03 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:11:07.800 12:10:03 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:07.800 12:10:03 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:07.800 12:10:03 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:07.800 12:10:03 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:07.800 12:10:03 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:07.800 12:10:03 thread -- scripts/common.sh@336 -- # IFS=.-: 00:11:07.800 12:10:03 thread -- scripts/common.sh@336 -- # read -ra ver1 00:11:07.800 12:10:03 thread -- scripts/common.sh@337 -- # IFS=.-: 00:11:07.800 12:10:03 thread -- scripts/common.sh@337 -- # read -ra ver2 00:11:07.800 12:10:03 thread -- scripts/common.sh@338 -- # local 'op=<' 00:11:07.800 12:10:03 thread -- scripts/common.sh@340 -- # ver1_l=2 00:11:07.800 12:10:03 thread -- scripts/common.sh@341 -- # ver2_l=1 00:11:07.800 12:10:03 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:07.800 12:10:03 thread -- scripts/common.sh@344 -- # case "$op" in 00:11:07.800 12:10:03 thread -- scripts/common.sh@345 -- # : 1 00:11:07.800 12:10:03 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:07.800 12:10:03 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:07.800 12:10:03 thread -- scripts/common.sh@365 -- # decimal 1 00:11:07.800 12:10:03 thread -- scripts/common.sh@353 -- # local d=1 00:11:07.800 12:10:03 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:07.800 12:10:03 thread -- scripts/common.sh@355 -- # echo 1 00:11:07.800 12:10:03 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:11:07.800 12:10:03 thread -- scripts/common.sh@366 -- # decimal 2 00:11:07.800 12:10:03 thread -- scripts/common.sh@353 -- # local d=2 00:11:07.800 12:10:03 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:07.800 12:10:03 thread -- scripts/common.sh@355 -- # echo 2 00:11:07.800 12:10:03 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:11:07.800 12:10:03 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:07.800 12:10:03 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:07.800 12:10:03 thread -- scripts/common.sh@368 -- # return 0 00:11:07.800 12:10:03 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:07.800 12:10:03 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:07.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.800 --rc genhtml_branch_coverage=1 00:11:07.800 --rc genhtml_function_coverage=1 00:11:07.800 --rc genhtml_legend=1 00:11:07.800 --rc geninfo_all_blocks=1 00:11:07.800 --rc geninfo_unexecuted_blocks=1 00:11:07.800 00:11:07.800 ' 00:11:07.800 12:10:03 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:07.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.800 --rc genhtml_branch_coverage=1 00:11:07.800 --rc genhtml_function_coverage=1 00:11:07.800 --rc genhtml_legend=1 00:11:07.800 --rc geninfo_all_blocks=1 00:11:07.800 --rc geninfo_unexecuted_blocks=1 00:11:07.800 00:11:07.800 ' 00:11:07.800 12:10:03 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:07.800 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.800 --rc genhtml_branch_coverage=1 00:11:07.800 --rc genhtml_function_coverage=1 00:11:07.800 --rc genhtml_legend=1 00:11:07.800 --rc geninfo_all_blocks=1 00:11:07.800 --rc geninfo_unexecuted_blocks=1 00:11:07.800 00:11:07.800 ' 00:11:07.800 12:10:03 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:07.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.800 --rc genhtml_branch_coverage=1 00:11:07.800 --rc genhtml_function_coverage=1 00:11:07.800 --rc genhtml_legend=1 00:11:07.800 --rc geninfo_all_blocks=1 00:11:07.800 --rc geninfo_unexecuted_blocks=1 00:11:07.800 00:11:07.800 ' 00:11:07.800 12:10:03 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:07.800 12:10:03 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:11:07.800 12:10:03 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:07.800 12:10:03 thread -- common/autotest_common.sh@10 -- # set +x 00:11:07.800 ************************************ 00:11:07.800 START TEST thread_poller_perf 00:11:07.800 ************************************ 00:11:07.800 12:10:03 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:07.800 [2024-11-25 12:10:03.827036] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
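The `cmp_versions` trace above (scripts/common.sh) evaluates `lt 1.15 2` by splitting each dotted version on `.` and comparing components left to right, treating missing components as 0. A hedged re-implementation of that logic (the function name is illustrative, not the script's own):

```shell
#!/usr/bin/env bash
# Return 0 (true) if dotted version $1 is strictly less than $2.
version_lt() {
  local IFS=.
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    local x=${a[i]:-0} y=${b[i]:-0}   # missing components count as 0
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1   # equal versions are not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```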
00:11:07.800 [2024-11-25 12:10:03.827403] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59685 ] 00:11:08.059 [2024-11-25 12:10:04.012882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.059 [2024-11-25 12:10:04.145095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.059 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:11:09.433 [2024-11-25T12:10:05.524Z] ====================================== 00:11:09.433 [2024-11-25T12:10:05.524Z] busy:2213425013 (cyc) 00:11:09.433 [2024-11-25T12:10:05.524Z] total_run_count: 301000 00:11:09.433 [2024-11-25T12:10:05.524Z] tsc_hz: 2200000000 (cyc) 00:11:09.433 [2024-11-25T12:10:05.524Z] ====================================== 00:11:09.433 [2024-11-25T12:10:05.524Z] poller_cost: 7353 (cyc), 3342 (nsec) 00:11:09.433 00:11:09.433 real 0m1.616s 00:11:09.433 user 0m1.400s 00:11:09.433 ************************************ 00:11:09.433 END TEST thread_poller_perf 00:11:09.433 ************************************ 00:11:09.433 sys 0m0.105s 00:11:09.433 12:10:05 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:09.433 12:10:05 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:09.433 12:10:05 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:09.433 12:10:05 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:11:09.433 12:10:05 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:09.433 12:10:05 thread -- common/autotest_common.sh@10 -- # set +x 00:11:09.433 ************************************ 00:11:09.433 START TEST thread_poller_perf 00:11:09.433 
************************************ 00:11:09.433 12:10:05 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:09.433 [2024-11-25 12:10:05.495175] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:11:09.433 [2024-11-25 12:10:05.495386] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59727 ] 00:11:09.691 [2024-11-25 12:10:05.686328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.949 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:11:09.949 [2024-11-25 12:10:05.826124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.326 [2024-11-25T12:10:07.417Z] ====================================== 00:11:11.326 [2024-11-25T12:10:07.417Z] busy:2205229155 (cyc) 00:11:11.326 [2024-11-25T12:10:07.417Z] total_run_count: 3772000 00:11:11.326 [2024-11-25T12:10:07.417Z] tsc_hz: 2200000000 (cyc) 00:11:11.326 [2024-11-25T12:10:07.417Z] ====================================== 00:11:11.326 [2024-11-25T12:10:07.417Z] poller_cost: 584 (cyc), 265 (nsec) 00:11:11.326 ************************************ 00:11:11.326 END TEST thread_poller_perf 00:11:11.326 ************************************ 00:11:11.326 00:11:11.326 real 0m1.615s 00:11:11.326 user 0m1.401s 00:11:11.326 sys 0m0.104s 00:11:11.326 12:10:07 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:11.326 12:10:07 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:11.326 12:10:07 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:11:11.326 ************************************ 00:11:11.326 END TEST thread 00:11:11.326 ************************************ 00:11:11.326 
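The `poller_cost` figures reported above follow directly from the printed counters: busy TSC cycles divided by `total_run_count` gives cycles per poll, and scaling by `tsc_hz` converts to nanoseconds. The exact rounding inside poller_perf is an assumption, but plain integer division reproduces the first run's reported numbers (all values copied from the trace):

```shell
#!/usr/bin/env bash
# Counters from the first poller_perf run above.
busy=2213425013      # busy: TSC cycles spent polling
runs=301000          # total_run_count
tsc_hz=2200000000    # tsc_hz: 2.2 GHz

cost_cyc=$(( busy / runs ))
cost_nsec=$(( busy * 1000000000 / tsc_hz / runs ))
echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"
```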
00:11:11.326 real 0m3.503s 00:11:11.326 user 0m2.937s 00:11:11.326 sys 0m0.346s 00:11:11.326 12:10:07 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:11.326 12:10:07 thread -- common/autotest_common.sh@10 -- # set +x 00:11:11.326 12:10:07 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:11:11.326 12:10:07 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:11.326 12:10:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:11.326 12:10:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:11.326 12:10:07 -- common/autotest_common.sh@10 -- # set +x 00:11:11.326 ************************************ 00:11:11.327 START TEST app_cmdline 00:11:11.327 ************************************ 00:11:11.327 12:10:07 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:11.327 * Looking for test storage... 00:11:11.327 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:11.327 12:10:07 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:11.327 12:10:07 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:11:11.327 12:10:07 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:11.327 12:10:07 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:11.327 12:10:07 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:11.327 12:10:07 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:11.327 12:10:07 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:11.327 12:10:07 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:11:11.327 12:10:07 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:11:11.327 12:10:07 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:11:11.327 12:10:07 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:11:11.327 12:10:07 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:11:11.327 12:10:07 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:11:11.327 12:10:07 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:11:11.327 12:10:07 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:11.327 12:10:07 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:11:11.327 12:10:07 app_cmdline -- scripts/common.sh@345 -- # : 1 00:11:11.327 12:10:07 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:11.327 12:10:07 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:11.327 12:10:07 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:11:11.327 12:10:07 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:11:11.327 12:10:07 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:11.327 12:10:07 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:11:11.327 12:10:07 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:11:11.327 12:10:07 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:11:11.327 12:10:07 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:11:11.327 12:10:07 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:11.327 12:10:07 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:11:11.327 12:10:07 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:11:11.327 12:10:07 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:11.327 12:10:07 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:11.327 12:10:07 app_cmdline -- scripts/common.sh@368 -- # return 0 00:11:11.327 12:10:07 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:11.327 12:10:07 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:11.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.327 --rc genhtml_branch_coverage=1 00:11:11.327 --rc genhtml_function_coverage=1 00:11:11.327 --rc 
genhtml_legend=1 00:11:11.327 --rc geninfo_all_blocks=1 00:11:11.327 --rc geninfo_unexecuted_blocks=1 00:11:11.327 00:11:11.327 ' 00:11:11.327 12:10:07 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:11.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.327 --rc genhtml_branch_coverage=1 00:11:11.327 --rc genhtml_function_coverage=1 00:11:11.327 --rc genhtml_legend=1 00:11:11.327 --rc geninfo_all_blocks=1 00:11:11.327 --rc geninfo_unexecuted_blocks=1 00:11:11.327 00:11:11.327 ' 00:11:11.327 12:10:07 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:11.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.327 --rc genhtml_branch_coverage=1 00:11:11.327 --rc genhtml_function_coverage=1 00:11:11.327 --rc genhtml_legend=1 00:11:11.327 --rc geninfo_all_blocks=1 00:11:11.327 --rc geninfo_unexecuted_blocks=1 00:11:11.327 00:11:11.327 ' 00:11:11.327 12:10:07 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:11.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.327 --rc genhtml_branch_coverage=1 00:11:11.327 --rc genhtml_function_coverage=1 00:11:11.327 --rc genhtml_legend=1 00:11:11.327 --rc geninfo_all_blocks=1 00:11:11.327 --rc geninfo_unexecuted_blocks=1 00:11:11.327 00:11:11.327 ' 00:11:11.327 12:10:07 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:11:11.327 12:10:07 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59815 00:11:11.327 12:10:07 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:11:11.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:11.327 12:10:07 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59815 00:11:11.327 12:10:07 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59815 ']' 00:11:11.327 12:10:07 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.327 12:10:07 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:11.327 12:10:07 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.327 12:10:07 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:11.327 12:10:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:11.585 [2024-11-25 12:10:07.445772] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:11:11.585 [2024-11-25 12:10:07.446190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59815 ] 00:11:11.585 [2024-11-25 12:10:07.636005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.843 [2024-11-25 12:10:07.766455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.780 12:10:08 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:12.780 12:10:08 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:11:12.780 12:10:08 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:11:13.038 { 00:11:13.039 "version": "SPDK v25.01-pre git sha1 f1dd81af3", 00:11:13.039 "fields": { 00:11:13.039 "major": 25, 00:11:13.039 "minor": 1, 00:11:13.039 "patch": 0, 00:11:13.039 "suffix": "-pre", 00:11:13.039 "commit": "f1dd81af3" 00:11:13.039 } 00:11:13.039 } 00:11:13.039 12:10:08 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:11:13.039 
12:10:08 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:11:13.039 12:10:08 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:11:13.039 12:10:08 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:11:13.039 12:10:08 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:11:13.039 12:10:08 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:11:13.039 12:10:08 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.039 12:10:08 app_cmdline -- app/cmdline.sh@26 -- # sort 00:11:13.039 12:10:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:13.039 12:10:08 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.039 12:10:08 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:11:13.039 12:10:08 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:11:13.039 12:10:08 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:13.039 12:10:08 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:11:13.039 12:10:08 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:13.039 12:10:08 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:13.039 12:10:08 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:13.039 12:10:08 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:13.039 12:10:08 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:13.039 12:10:08 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:13.039 12:10:08 app_cmdline -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:13.039 12:10:08 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:13.039 12:10:08 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:13.039 12:10:08 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:13.297 request: 00:11:13.297 { 00:11:13.297 "method": "env_dpdk_get_mem_stats", 00:11:13.297 "req_id": 1 00:11:13.297 } 00:11:13.297 Got JSON-RPC error response 00:11:13.297 response: 00:11:13.297 { 00:11:13.297 "code": -32601, 00:11:13.297 "message": "Method not found" 00:11:13.297 } 00:11:13.297 12:10:09 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:11:13.297 12:10:09 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:13.297 12:10:09 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:13.297 12:10:09 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:13.297 12:10:09 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59815 00:11:13.297 12:10:09 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59815 ']' 00:11:13.297 12:10:09 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59815 00:11:13.297 12:10:09 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:11:13.297 12:10:09 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:13.297 12:10:09 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59815 00:11:13.297 killing process with pid 59815 00:11:13.297 12:10:09 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:13.297 12:10:09 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:13.297 12:10:09 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59815' 00:11:13.297 12:10:09 app_cmdline -- 
common/autotest_common.sh@973 -- # kill 59815 00:11:13.297 12:10:09 app_cmdline -- common/autotest_common.sh@978 -- # wait 59815 00:11:15.829 ************************************ 00:11:15.829 END TEST app_cmdline 00:11:15.830 ************************************ 00:11:15.830 00:11:15.830 real 0m4.400s 00:11:15.830 user 0m4.841s 00:11:15.830 sys 0m0.690s 00:11:15.830 12:10:11 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:15.830 12:10:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:15.830 12:10:11 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:15.830 12:10:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:15.830 12:10:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:15.830 12:10:11 -- common/autotest_common.sh@10 -- # set +x 00:11:15.830 ************************************ 00:11:15.830 START TEST version 00:11:15.830 ************************************ 00:11:15.830 12:10:11 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:15.830 * Looking for test storage... 
00:11:15.830 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:15.830 12:10:11 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:15.830 12:10:11 version -- common/autotest_common.sh@1693 -- # lcov --version 00:11:15.830 12:10:11 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:15.830 12:10:11 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:15.830 12:10:11 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:15.830 12:10:11 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:15.830 12:10:11 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:15.830 12:10:11 version -- scripts/common.sh@336 -- # IFS=.-: 00:11:15.830 12:10:11 version -- scripts/common.sh@336 -- # read -ra ver1 00:11:15.830 12:10:11 version -- scripts/common.sh@337 -- # IFS=.-: 00:11:15.830 12:10:11 version -- scripts/common.sh@337 -- # read -ra ver2 00:11:15.830 12:10:11 version -- scripts/common.sh@338 -- # local 'op=<' 00:11:15.830 12:10:11 version -- scripts/common.sh@340 -- # ver1_l=2 00:11:15.830 12:10:11 version -- scripts/common.sh@341 -- # ver2_l=1 00:11:15.830 12:10:11 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:15.830 12:10:11 version -- scripts/common.sh@344 -- # case "$op" in 00:11:15.830 12:10:11 version -- scripts/common.sh@345 -- # : 1 00:11:15.830 12:10:11 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:15.830 12:10:11 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:15.830 12:10:11 version -- scripts/common.sh@365 -- # decimal 1 00:11:15.830 12:10:11 version -- scripts/common.sh@353 -- # local d=1 00:11:15.830 12:10:11 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:15.830 12:10:11 version -- scripts/common.sh@355 -- # echo 1 00:11:15.830 12:10:11 version -- scripts/common.sh@365 -- # ver1[v]=1 00:11:15.830 12:10:11 version -- scripts/common.sh@366 -- # decimal 2 00:11:15.830 12:10:11 version -- scripts/common.sh@353 -- # local d=2 00:11:15.830 12:10:11 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:15.830 12:10:11 version -- scripts/common.sh@355 -- # echo 2 00:11:15.830 12:10:11 version -- scripts/common.sh@366 -- # ver2[v]=2 00:11:15.830 12:10:11 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:15.830 12:10:11 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:15.830 12:10:11 version -- scripts/common.sh@368 -- # return 0 00:11:15.830 12:10:11 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:15.830 12:10:11 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:15.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.830 --rc genhtml_branch_coverage=1 00:11:15.830 --rc genhtml_function_coverage=1 00:11:15.830 --rc genhtml_legend=1 00:11:15.830 --rc geninfo_all_blocks=1 00:11:15.830 --rc geninfo_unexecuted_blocks=1 00:11:15.830 00:11:15.830 ' 00:11:15.830 12:10:11 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:15.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.830 --rc genhtml_branch_coverage=1 00:11:15.830 --rc genhtml_function_coverage=1 00:11:15.830 --rc genhtml_legend=1 00:11:15.830 --rc geninfo_all_blocks=1 00:11:15.830 --rc geninfo_unexecuted_blocks=1 00:11:15.830 00:11:15.830 ' 00:11:15.830 12:10:11 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:15.830 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.830 --rc genhtml_branch_coverage=1 00:11:15.830 --rc genhtml_function_coverage=1 00:11:15.830 --rc genhtml_legend=1 00:11:15.830 --rc geninfo_all_blocks=1 00:11:15.830 --rc geninfo_unexecuted_blocks=1 00:11:15.830 00:11:15.830 ' 00:11:15.830 12:10:11 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:15.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.830 --rc genhtml_branch_coverage=1 00:11:15.830 --rc genhtml_function_coverage=1 00:11:15.830 --rc genhtml_legend=1 00:11:15.830 --rc geninfo_all_blocks=1 00:11:15.830 --rc geninfo_unexecuted_blocks=1 00:11:15.830 00:11:15.830 ' 00:11:15.830 12:10:11 version -- app/version.sh@17 -- # get_header_version major 00:11:15.830 12:10:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:15.830 12:10:11 version -- app/version.sh@14 -- # cut -f2 00:11:15.830 12:10:11 version -- app/version.sh@14 -- # tr -d '"' 00:11:15.830 12:10:11 version -- app/version.sh@17 -- # major=25 00:11:15.830 12:10:11 version -- app/version.sh@18 -- # get_header_version minor 00:11:15.830 12:10:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:15.830 12:10:11 version -- app/version.sh@14 -- # cut -f2 00:11:15.830 12:10:11 version -- app/version.sh@14 -- # tr -d '"' 00:11:15.830 12:10:11 version -- app/version.sh@18 -- # minor=1 00:11:15.830 12:10:11 version -- app/version.sh@19 -- # get_header_version patch 00:11:15.830 12:10:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:15.830 12:10:11 version -- app/version.sh@14 -- # cut -f2 00:11:15.830 12:10:11 version -- app/version.sh@14 -- # tr -d '"' 00:11:15.830 12:10:11 version -- app/version.sh@19 -- # patch=0 00:11:15.830 
12:10:11 version -- app/version.sh@20 -- # get_header_version suffix 00:11:15.830 12:10:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:15.830 12:10:11 version -- app/version.sh@14 -- # cut -f2 00:11:15.830 12:10:11 version -- app/version.sh@14 -- # tr -d '"' 00:11:15.830 12:10:11 version -- app/version.sh@20 -- # suffix=-pre 00:11:15.830 12:10:11 version -- app/version.sh@22 -- # version=25.1 00:11:15.830 12:10:11 version -- app/version.sh@25 -- # (( patch != 0 )) 00:11:15.830 12:10:11 version -- app/version.sh@28 -- # version=25.1rc0 00:11:15.830 12:10:11 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:15.830 12:10:11 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:11:15.830 12:10:11 version -- app/version.sh@30 -- # py_version=25.1rc0 00:11:15.830 12:10:11 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:11:15.830 00:11:15.830 real 0m0.241s 00:11:15.830 user 0m0.171s 00:11:15.830 sys 0m0.110s 00:11:15.830 ************************************ 00:11:15.830 END TEST version 00:11:15.830 ************************************ 00:11:15.830 12:10:11 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:15.830 12:10:11 version -- common/autotest_common.sh@10 -- # set +x 00:11:15.830 12:10:11 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:11:15.830 12:10:11 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:11:15.830 12:10:11 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:11:15.830 12:10:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:15.830 12:10:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:15.830 12:10:11 -- 
common/autotest_common.sh@10 -- # set +x 00:11:15.830 ************************************ 00:11:15.830 START TEST bdev_raid 00:11:15.830 ************************************ 00:11:15.830 12:10:11 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:11:16.089 * Looking for test storage... 00:11:16.089 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:11:16.089 12:10:11 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:16.089 12:10:11 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:11:16.089 12:10:11 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:16.089 12:10:12 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:16.089 12:10:12 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:16.089 12:10:12 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:16.089 12:10:12 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:16.089 12:10:12 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:11:16.089 12:10:12 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:11:16.089 12:10:12 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:11:16.089 12:10:12 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:11:16.089 12:10:12 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:11:16.089 12:10:12 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:11:16.089 12:10:12 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:11:16.089 12:10:12 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:16.089 12:10:12 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:11:16.089 12:10:12 bdev_raid -- scripts/common.sh@345 -- # : 1 00:11:16.089 12:10:12 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:16.089 12:10:12 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:16.089 12:10:12 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:11:16.089 12:10:12 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:11:16.089 12:10:12 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:16.089 12:10:12 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:11:16.089 12:10:12 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:11:16.089 12:10:12 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:11:16.089 12:10:12 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:11:16.089 12:10:12 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:16.089 12:10:12 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:11:16.089 12:10:12 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:11:16.089 12:10:12 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:16.089 12:10:12 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:16.089 12:10:12 bdev_raid -- scripts/common.sh@368 -- # return 0 00:11:16.089 12:10:12 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:16.089 12:10:12 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:16.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.089 --rc genhtml_branch_coverage=1 00:11:16.089 --rc genhtml_function_coverage=1 00:11:16.089 --rc genhtml_legend=1 00:11:16.089 --rc geninfo_all_blocks=1 00:11:16.089 --rc geninfo_unexecuted_blocks=1 00:11:16.089 00:11:16.089 ' 00:11:16.089 12:10:12 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:16.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.089 --rc genhtml_branch_coverage=1 00:11:16.089 --rc genhtml_function_coverage=1 00:11:16.089 --rc genhtml_legend=1 00:11:16.089 --rc geninfo_all_blocks=1 00:11:16.089 --rc geninfo_unexecuted_blocks=1 00:11:16.089 00:11:16.089 ' 00:11:16.089 12:10:12 bdev_raid -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:11:16.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.089 --rc genhtml_branch_coverage=1 00:11:16.089 --rc genhtml_function_coverage=1 00:11:16.089 --rc genhtml_legend=1 00:11:16.089 --rc geninfo_all_blocks=1 00:11:16.089 --rc geninfo_unexecuted_blocks=1 00:11:16.089 00:11:16.089 ' 00:11:16.089 12:10:12 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:16.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.089 --rc genhtml_branch_coverage=1 00:11:16.089 --rc genhtml_function_coverage=1 00:11:16.089 --rc genhtml_legend=1 00:11:16.089 --rc geninfo_all_blocks=1 00:11:16.089 --rc geninfo_unexecuted_blocks=1 00:11:16.089 00:11:16.089 ' 00:11:16.089 12:10:12 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:16.089 12:10:12 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:11:16.089 12:10:12 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:11:16.089 12:10:12 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:11:16.089 12:10:12 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:11:16.089 12:10:12 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:11:16.089 12:10:12 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:11:16.089 12:10:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:16.089 12:10:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.089 12:10:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:16.089 ************************************ 00:11:16.089 START TEST raid1_resize_data_offset_test 00:11:16.089 ************************************ 00:11:16.089 12:10:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:11:16.089 12:10:12 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # 
raid_pid=59998 00:11:16.089 12:10:12 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:16.089 12:10:12 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 59998' 00:11:16.089 Process raid pid: 59998 00:11:16.089 12:10:12 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 59998 00:11:16.089 12:10:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 59998 ']' 00:11:16.089 12:10:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.089 12:10:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:16.089 12:10:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.089 12:10:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:16.089 12:10:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.349 [2024-11-25 12:10:12.199366] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:11:16.349 [2024-11-25 12:10:12.200431] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:16.349 [2024-11-25 12:10:12.388878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.609 [2024-11-25 12:10:12.520669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.868 [2024-11-25 12:10:12.728139] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:16.868 [2024-11-25 12:10:12.728208] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:17.437 12:10:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:17.437 12:10:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:11:17.437 12:10:13 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:11:17.437 12:10:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.437 12:10:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.437 malloc0 00:11:17.437 12:10:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.437 12:10:13 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:11:17.437 12:10:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.437 12:10:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.437 malloc1 00:11:17.437 12:10:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.437 12:10:13 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:11:17.437 12:10:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.437 12:10:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.437 null0 00:11:17.437 12:10:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.437 12:10:13 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:11:17.437 12:10:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.437 12:10:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.437 [2024-11-25 12:10:13.428580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:11:17.437 [2024-11-25 12:10:13.431605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:17.437 [2024-11-25 12:10:13.431710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:11:17.437 [2024-11-25 12:10:13.431970] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:17.437 [2024-11-25 12:10:13.431998] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:11:17.437 [2024-11-25 12:10:13.432439] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:11:17.437 [2024-11-25 12:10:13.432691] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:17.437 [2024-11-25 12:10:13.432716] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:11:17.437 [2024-11-25 12:10:13.433039] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:11:17.437 12:10:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:17.437 12:10:13 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:17.437 12:10:13 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:11:17.437 12:10:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:17.437 12:10:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:11:17.437 12:10:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:17.437 12:10:13 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 ))
00:11:17.437 12:10:13 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0
00:11:17.437 12:10:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:17.437 12:10:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:11:17.437 [2024-11-25 12:10:13.489061] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0
00:11:17.437 12:10:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:17.437 12:10:13 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30
00:11:17.437 12:10:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:17.437 12:10:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:11:18.004 malloc2
00:11:18.004 12:10:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:18.004 12:10:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev Raid malloc2
00:11:18.004 12:10:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:18.004 12:10:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:11:18.004 [2024-11-25 12:10:14.079052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:11:18.263 [2024-11-25 12:10:14.096170] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:11:18.263 12:10:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:18.263 [2024-11-25 12:10:14.098976] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid
00:11:18.263 12:10:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:18.263 12:10:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:11:18.263 12:10:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:18.263 12:10:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:11:18.263 12:10:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:18.263 12:10:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 ))
00:11:18.263 12:10:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 59998
00:11:18.263 12:10:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 59998 ']'
00:11:18.263 12:10:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 59998
00:11:18.263 12:10:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname
00:11:18.263 12:10:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:18.263 12:10:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59998 killing process with pid 59998 12:10:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:18.263 12:10:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:18.263 12:10:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59998'
00:11:18.264 12:10:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 59998
00:11:18.264 12:10:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 59998
00:11:18.264 [2024-11-25 12:10:14.180888] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:11:18.264 [2024-11-25 12:10:14.183124] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled
00:11:18.264 [2024-11-25 12:10:14.183203] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:18.264 [2024-11-25 12:10:14.183230] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2
00:11:18.264 [2024-11-25 12:10:14.215620] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:18.264 [2024-11-25 12:10:14.216062] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:18.264 [2024-11-25 12:10:14.216088] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:11:20.167 [2024-11-25 12:10:15.865346] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:11:21.105 ************************************
00:11:21.105 END TEST raid1_resize_data_offset_test
00:11:21.105 ************************************
00:11:21.105 12:10:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0
00:11:21.105
00:11:21.105 real 0m4.827s user 0m4.825s sys 0m0.635s
00:11:21.105 12:10:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:21.105 12:10:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:11:21.105 12:10:16 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0
00:11:21.105 12:10:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:21.105 12:10:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:21.105 12:10:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:11:21.105 ************************************
00:11:21.105 START TEST raid0_resize_superblock_test
00:11:21.105 ************************************
00:11:21.105 12:10:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0
00:11:21.105 12:10:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0
00:11:21.105 Process raid pid: 60087
00:11:21.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:21.105 12:10:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60087
00:11:21.105 12:10:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60087'
00:11:21.105 12:10:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60087
00:11:21.105 12:10:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:11:21.105 12:10:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60087 ']'
00:11:21.105 12:10:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:21.105 12:10:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:21.105 12:10:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:21.105 12:10:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:21.105 12:10:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:21.105 [2024-11-25 12:10:17.078557] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization...
00:11:21.105 [2024-11-25 12:10:17.078738] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:21.364 [2024-11-25 12:10:17.271564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:21.623 [2024-11-25 12:10:17.426980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:21.623 [2024-11-25 12:10:17.641218] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:21.623 [2024-11-25 12:10:17.641279] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:22.192 12:10:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:22.192 12:10:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:11:22.192 12:10:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:11:22.192 12:10:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.192 12:10:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:22.760 malloc0
00:11:22.760 12:10:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:22.760 12:10:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:11:22.760 12:10:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.760 12:10:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:22.760 [2024-11-25 12:10:18.619463] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:11:22.760 [2024-11-25 12:10:18.619543] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:22.760 [2024-11-25 12:10:18.619589] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:11:22.760 [2024-11-25 12:10:18.619610] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:22.760 [2024-11-25 12:10:18.622777] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:22.760 [2024-11-25 12:10:18.622841] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:11:22.760 pt0
00:11:22.760 12:10:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:22.760 12:10:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:11:22.760 12:10:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.760 12:10:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:22.760 2ea04e16-5ef0-4585-824c-a1a826b6bd5e
00:11:22.760 12:10:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:22.760 12:10:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:11:22.760 12:10:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.760 12:10:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:22.760 19633092-47f2-4051-b32d-043a662d6c1e
00:11:22.760 12:10:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:22.760 12:10:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:11:22.760 12:10:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.760 12:10:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:22.760 0aaddee7-310c-4bb8-8fe2-a5ebef3191fc
00:11:22.760 12:10:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:22.760 12:10:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:11:22.760 12:10:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:11:22.760 12:10:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.760 12:10:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:22.760 [2024-11-25 12:10:18.767758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 19633092-47f2-4051-b32d-043a662d6c1e is claimed
00:11:22.760 [2024-11-25 12:10:18.767892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 0aaddee7-310c-4bb8-8fe2-a5ebef3191fc is claimed
00:11:22.760 [2024-11-25 12:10:18.768080] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:11:22.760 [2024-11-25 12:10:18.768106] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512
00:11:22.760 [2024-11-25 12:10:18.768519] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:11:22.760 [2024-11-25 12:10:18.768806] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:11:22.760 [2024-11-25 12:10:18.768830] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:11:22.760 [2024-11-25 12:10:18.769043] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:22.760 12:10:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:22.760 12:10:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:11:22.761 12:10:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:11:22.761 12:10:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.761 12:10:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:22.761 12:10:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:22.761 12:10:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:11:22.761 12:10:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:11:22.761 12:10:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:11:22.761 12:10:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.761 12:10:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.021 12:10:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.021 12:10:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:11:23.021 12:10:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:11:23.021 12:10:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid
00:11:23.021 12:10:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.021 12:10:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.021 12:10:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:11:23.021 12:10:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks'
00:11:23.021 [2024-11-25 12:10:18.888089] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:23.021 12:10:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.021 12:10:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:11:23.021 12:10:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:11:23.021 12:10:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 ))
00:11:23.021 12:10:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100
00:11:23.021 12:10:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.021 12:10:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.021 [2024-11-25 12:10:18.936123] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:11:23.021 [2024-11-25 12:10:18.936160] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '19633092-47f2-4051-b32d-043a662d6c1e' was resized: old size 131072, new size 204800
00:11:23.021 12:10:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.021 12:10:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100
00:11:23.021 12:10:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.021 12:10:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.021 [2024-11-25 12:10:18.943945] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:11:23.021 [2024-11-25 12:10:18.943976] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '0aaddee7-310c-4bb8-8fe2-a5ebef3191fc' was resized: old size 131072, new size 204800
00:11:23.021 [2024-11-25 12:10:18.944016] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216
00:11:23.021 12:10:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.021 12:10:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:11:23.021 12:10:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.021 12:10:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:11:23.021 12:10:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.021 12:10:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.021 12:10:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 ))
00:11:23.021 12:10:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:11:23.021 12:10:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks'
00:11:23.021 12:10:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.021 12:10:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.021 12:10:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.021 12:10:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 ))
00:11:23.021 12:10:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:11:23.021 12:10:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid
00:11:23.021 12:10:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.021 12:10:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:11:23.021 12:10:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks'
00:11:23.021 12:10:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.021 [2024-11-25 12:10:19.052116] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:23.021 12:10:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.021 12:10:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:11:23.021 12:10:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:11:23.021 12:10:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 ))
00:11:23.021 12:10:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0
00:11:23.021 12:10:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.021 12:10:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.021 [2024-11-25 12:10:19.103870] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0
00:11:23.021 [2024-11-25 12:10:19.104097] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0
00:11:23.021 [2024-11-25 12:10:19.104162] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:23.021 [2024-11-25 12:10:19.104413] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1
00:11:23.021 [2024-11-25 12:10:19.104612] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:23.021 [2024-11-25 12:10:19.104710] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:23.021 [2024-11-25 12:10:19.104895] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:11:23.021 12:10:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.021 12:10:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:11:23.021 12:10:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.283 12:10:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.283 [2024-11-25 12:10:19.115829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:11:23.283 [2024-11-25 12:10:19.116055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:23.283 [2024-11-25 12:10:19.116129] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
00:11:23.283 [2024-11-25 12:10:19.116405] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:23.283 [2024-11-25 12:10:19.119399] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:23.283 [2024-11-25 12:10:19.119569] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:11:23.283 pt0
00:11:23.283 12:10:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.283 12:10:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine
00:11:23.283 12:10:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.283 12:10:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.283 [2024-11-25 12:10:19.122200] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 19633092-47f2-4051-b32d-043a662d6c1e
00:11:23.283 [2024-11-25 12:10:19.122290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 19633092-47f2-4051-b32d-043a662d6c1e is claimed
00:11:23.283 [2024-11-25 12:10:19.122457] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 0aaddee7-310c-4bb8-8fe2-a5ebef3191fc
00:11:23.283 [2024-11-25 12:10:19.122504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 0aaddee7-310c-4bb8-8fe2-a5ebef3191fc is claimed
00:11:23.283 [2024-11-25 12:10:19.122668] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 0aaddee7-310c-4bb8-8fe2-a5ebef3191fc (2) smaller than existing raid bdev Raid (3)
00:11:23.283 [2024-11-25 12:10:19.122778] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 19633092-47f2-4051-b32d-043a662d6c1e: File exists
00:11:23.283 [2024-11-25 12:10:19.122837] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:11:23.283 [2024-11-25 12:10:19.122857] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512
00:11:23.283 [2024-11-25 12:10:19.123181] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:11:23.283 [2024-11-25 12:10:19.123400] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:11:23.283 [2024-11-25 12:10:19.123416] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00
00:11:23.283 [2024-11-25 12:10:19.123615] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:23.283 12:10:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.283 12:10:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:11:23.283 12:10:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid
00:11:23.283 12:10:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.283 12:10:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:11:23.283 12:10:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks'
00:11:23.283 12:10:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:23.283 [2024-11-25 12:10:19.136200] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:23.283 12:10:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.283 12:10:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:11:23.283 12:10:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:11:23.283 12:10:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 ))
00:11:23.283 12:10:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60087
00:11:23.283 12:10:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60087 ']'
00:11:23.283 12:10:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60087
00:11:23.283 12:10:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname
00:11:23.283 12:10:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:23.284 12:10:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60087 killing process with pid 60087
00:11:23.284 12:10:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:23.284 12:10:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:23.284 12:10:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60087'
00:11:23.284 12:10:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60087
00:11:23.284 [2024-11-25 12:10:19.214365] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:11:23.284 12:10:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60087
00:11:23.284 [2024-11-25 12:10:19.214485] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:23.284 [2024-11-25 12:10:19.214554] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:23.284 [2024-11-25 12:10:19.214570] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline
00:11:24.746 [2024-11-25 12:10:20.517766] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:11:25.683 ************************************
00:11:25.683 END TEST raid0_resize_superblock_test
00:11:25.683 ************************************
00:11:25.683 12:10:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0
00:11:25.683
00:11:25.683 real 0m4.582s
00:11:25.683 user 0m4.895s
00:11:25.683 sys 0m0.648s
00:11:25.683 12:10:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:25.683 12:10:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:25.683 12:10:21 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1
00:11:25.683 12:10:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:25.683 12:10:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:25.683 12:10:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:11:25.683 ************************************
00:11:25.683 START TEST raid1_resize_superblock_test
00:11:25.683 ************************************
00:11:25.683 12:10:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1
00:11:25.683 12:10:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1
00:11:25.683 Process raid pid: 60186
00:11:25.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:25.683 12:10:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60186
00:11:25.683 12:10:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60186'
00:11:25.683 12:10:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60186
00:11:25.683 12:10:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:11:25.683 12:10:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60186 ']'
00:11:25.683 12:10:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:25.683 12:10:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:25.683 12:10:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:25.683 12:10:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:25.683 12:10:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:25.683 [2024-11-25 12:10:21.710804] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization...
00:11:25.683 [2024-11-25 12:10:21.711219] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:25.942 [2024-11-25 12:10:21.911866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:26.201 [2024-11-25 12:10:22.068395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:26.459 [2024-11-25 12:10:22.302010] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:26.459 [2024-11-25 12:10:22.302067] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:26.718 12:10:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:26.718 12:10:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:11:26.718 12:10:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:11:26.718 12:10:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.718 12:10:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.285 malloc0
00:11:27.285 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.285 12:10:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:11:27.285 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.285 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.285 [2024-11-25 12:10:23.229739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:11:27.285 [2024-11-25 12:10:23.229820] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:27.285 [2024-11-25 12:10:23.229857] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:11:27.285 [2024-11-25 12:10:23.229879] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:27.285 [2024-11-25 12:10:23.232680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:27.285 [2024-11-25 12:10:23.232728] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:11:27.285 pt0
00:11:27.285 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.285 12:10:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:11:27.285 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.285 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.285 071a3805-110b-4a3d-8b29-91c6d73d4fb1
00:11:27.285 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.285 12:10:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:11:27.285 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.285 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.285 0b85e543-3863-4000-b31c-7372af1bcba0
00:11:27.285 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.285 12:10:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:11:27.285 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.285 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.285 7d48e0ef-ce4e-4aea-8af0-720d0f0ea1ca
00:11:27.285 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.285 12:10:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:11:27.285 12:10:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:11:27.285 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.285 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.285 [2024-11-25 12:10:23.364757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 0b85e543-3863-4000-b31c-7372af1bcba0 is claimed
00:11:27.285 [2024-11-25 12:10:23.364878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7d48e0ef-ce4e-4aea-8af0-720d0f0ea1ca is claimed
00:11:27.285 [2024-11-25 12:10:23.365078] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:11:27.285 [2024-11-25 12:10:23.365104] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512
00:11:27.285 [2024-11-25 12:10:23.365485] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:11:27.285 [2024-11-25 12:10:23.365745] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:11:27.285 [2024-11-25 12:10:23.365762] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:11:27.285 [2024-11-25 12:10:23.365971] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:27.285 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.285 12:10:23 bdev_raid.raid1_resize_superblock_test --
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:11:27.285 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.285 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.285 12:10:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:11:27.543 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.543 12:10:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:11:27.543 12:10:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:11:27.544 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.544 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.544 12:10:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:11:27.544 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.544 12:10:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:11:27.544 12:10:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:11:27.544 12:10:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:11:27.544 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.544 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.544 12:10:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:11:27.544 12:10:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:11:27.544 [2024-11-25 
12:10:23.493031] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:27.544 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.544 12:10:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:11:27.544 12:10:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:11:27.544 12:10:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:11:27.544 12:10:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:11:27.544 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.544 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.544 [2024-11-25 12:10:23.545095] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:27.544 [2024-11-25 12:10:23.545252] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '0b85e543-3863-4000-b31c-7372af1bcba0' was resized: old size 131072, new size 204800 00:11:27.544 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.544 12:10:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:11:27.544 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.544 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.544 [2024-11-25 12:10:23.552930] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:27.544 [2024-11-25 12:10:23.553066] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '7d48e0ef-ce4e-4aea-8af0-720d0f0ea1ca' was resized: old size 131072, new size 204800 00:11:27.544 
[2024-11-25 12:10:23.553230] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:11:27.544 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.544 12:10:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:11:27.544 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.544 12:10:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:11:27.544 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.544 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.544 12:10:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:11:27.544 12:10:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:11:27.544 12:10:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:11:27.544 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.544 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.815 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.815 12:10:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:11:27.815 12:10:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:27.815 12:10:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:11:27.815 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.816 12:10:23 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:27.816 12:10:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:11:27.816 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.816 [2024-11-25 12:10:23.677156] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:27.816 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.816 12:10:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:27.816 12:10:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:27.816 12:10:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:11:27.816 12:10:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:11:27.816 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.816 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.816 [2024-11-25 12:10:23.728894] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:11:27.816 [2024-11-25 12:10:23.729152] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:11:27.816 [2024-11-25 12:10:23.729330] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:11:27.816 [2024-11-25 12:10:23.729719] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:27.816 [2024-11-25 12:10:23.730182] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:27.816 [2024-11-25 12:10:23.730469] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:27.816 
[2024-11-25 12:10:23.730639] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:11:27.816 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.816 12:10:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:11:27.816 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.816 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.816 [2024-11-25 12:10:23.736755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:11:27.816 [2024-11-25 12:10:23.736824] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.816 [2024-11-25 12:10:23.736858] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:11:27.816 [2024-11-25 12:10:23.736881] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.816 [2024-11-25 12:10:23.739716] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.816 [2024-11-25 12:10:23.739767] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:11:27.816 pt0 00:11:27.816 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.816 12:10:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:11:27.816 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.816 [2024-11-25 12:10:23.742141] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 0b85e543-3863-4000-b31c-7372af1bcba0 00:11:27.816 [2024-11-25 12:10:23.742223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 0b85e543-3863-4000-b31c-7372af1bcba0 is 
claimed 00:11:27.816 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.816 [2024-11-25 12:10:23.742395] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 7d48e0ef-ce4e-4aea-8af0-720d0f0ea1ca 00:11:27.816 [2024-11-25 12:10:23.742433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7d48e0ef-ce4e-4aea-8af0-720d0f0ea1ca is claimed 00:11:27.816 [2024-11-25 12:10:23.742587] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 7d48e0ef-ce4e-4aea-8af0-720d0f0ea1ca (2) smaller than existing raid bdev Raid (3) 00:11:27.816 [2024-11-25 12:10:23.742618] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 0b85e543-3863-4000-b31c-7372af1bcba0: File exists 00:11:27.816 [2024-11-25 12:10:23.742670] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:11:27.816 [2024-11-25 12:10:23.742689] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:27.816 [2024-11-25 12:10:23.743046] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:27.816 [2024-11-25 12:10:23.743462] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:11:27.816 [2024-11-25 12:10:23.743486] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:11:27.816 [2024-11-25 12:10:23.743680] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:27.816 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.816 12:10:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:11:27.816 12:10:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:11:27.816 12:10:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 
-- # case $raid_level in 00:11:27.816 12:10:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:11:27.816 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.816 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.816 [2024-11-25 12:10:23.757106] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:27.816 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.816 12:10:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:11:27.816 12:10:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:11:27.816 12:10:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:11:27.816 12:10:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60186 00:11:27.816 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60186 ']' 00:11:27.816 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60186 00:11:27.816 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:27.816 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:27.816 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60186 00:11:27.816 killing process with pid 60186 00:11:27.816 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:27.816 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:27.816 12:10:23 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 60186' 00:11:27.816 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60186 00:11:27.816 [2024-11-25 12:10:23.830564] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:27.816 12:10:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60186 00:11:27.816 [2024-11-25 12:10:23.830671] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:27.816 [2024-11-25 12:10:23.830745] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:27.816 [2024-11-25 12:10:23.830759] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:11:29.190 [2024-11-25 12:10:25.165970] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:30.122 12:10:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:11:30.122 00:11:30.122 real 0m4.600s 00:11:30.122 user 0m4.909s 00:11:30.122 sys 0m0.640s 00:11:30.122 12:10:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:30.122 ************************************ 00:11:30.122 END TEST raid1_resize_superblock_test 00:11:30.122 ************************************ 00:11:30.122 12:10:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.380 12:10:26 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:11:30.380 12:10:26 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:11:30.380 12:10:26 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:11:30.380 12:10:26 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:11:30.380 12:10:26 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:11:30.380 12:10:26 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:11:30.380 
12:10:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:30.380 12:10:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:30.380 12:10:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:30.380 ************************************ 00:11:30.380 START TEST raid_function_test_raid0 00:11:30.380 ************************************ 00:11:30.380 12:10:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:11:30.380 12:10:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:11:30.381 12:10:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:11:30.381 12:10:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:11:30.381 12:10:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60283 00:11:30.381 Process raid pid: 60283 00:11:30.381 12:10:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60283' 00:11:30.381 12:10:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60283 00:11:30.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.381 12:10:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60283 ']' 00:11:30.381 12:10:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.381 12:10:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:30.381 12:10:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:30.381 12:10:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:30.381 12:10:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:11:30.381 12:10:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:30.381 [2024-11-25 12:10:26.364034] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:11:30.381 [2024-11-25 12:10:26.364441] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:30.639 [2024-11-25 12:10:26.553445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.639 [2024-11-25 12:10:26.711588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.898 [2024-11-25 12:10:26.960195] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:30.898 [2024-11-25 12:10:26.960253] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:31.466 12:10:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:31.466 12:10:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:11:31.466 12:10:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:11:31.466 12:10:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.466 12:10:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:11:31.466 Base_1 00:11:31.466 12:10:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.466 12:10:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd 
bdev_malloc_create 32 512 -b Base_2 00:11:31.466 12:10:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.466 12:10:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:11:31.466 Base_2 00:11:31.466 12:10:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.466 12:10:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:11:31.466 12:10:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.466 12:10:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:11:31.466 [2024-11-25 12:10:27.440901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:11:31.466 [2024-11-25 12:10:27.443372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:11:31.466 [2024-11-25 12:10:27.443475] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:31.466 [2024-11-25 12:10:27.443497] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:31.466 [2024-11-25 12:10:27.443816] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:31.466 [2024-11-25 12:10:27.444009] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:31.466 [2024-11-25 12:10:27.444026] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:11:31.466 [2024-11-25 12:10:27.444210] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:31.466 12:10:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.466 12:10:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 
00:11:31.466 12:10:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:11:31.466 12:10:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.466 12:10:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:11:31.466 12:10:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.466 12:10:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:11:31.466 12:10:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:11:31.466 12:10:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:11:31.466 12:10:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:31.466 12:10:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:11:31.466 12:10:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:31.466 12:10:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:31.466 12:10:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:31.466 12:10:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:11:31.466 12:10:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:31.466 12:10:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:31.466 12:10:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:11:31.725 [2024-11-25 12:10:27.785058] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:31.725 /dev/nbd0 00:11:31.983 12:10:27 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:31.983 12:10:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:31.983 12:10:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:31.983 12:10:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:11:31.983 12:10:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:31.983 12:10:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:31.983 12:10:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:31.983 12:10:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:11:31.983 12:10:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:31.983 12:10:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:31.983 12:10:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:31.983 1+0 records in 00:11:31.983 1+0 records out 00:11:31.983 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348704 s, 11.7 MB/s 00:11:31.983 12:10:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:31.983 12:10:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:11:31.983 12:10:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:31.983 12:10:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:31.983 12:10:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 
-- # return 0 00:11:31.983 12:10:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:31.983 12:10:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:31.983 12:10:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:11:31.983 12:10:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:11:31.983 12:10:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:11:32.242 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:32.242 { 00:11:32.242 "nbd_device": "/dev/nbd0", 00:11:32.242 "bdev_name": "raid" 00:11:32.242 } 00:11:32.242 ]' 00:11:32.242 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:32.242 { 00:11:32.242 "nbd_device": "/dev/nbd0", 00:11:32.242 "bdev_name": "raid" 00:11:32.242 } 00:11:32.242 ]' 00:11:32.242 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:32.242 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:11:32.242 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:11:32.242 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:32.242 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:11:32.242 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:11:32.242 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:11:32.242 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:11:32.242 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 
00:11:32.242 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:11:32.242 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:11:32.242 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:11:32.242 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:11:32.242 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:11:32.242 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:11:32.242 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:11:32.242 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:11:32.242 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:11:32.242 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:11:32.242 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:11:32.242 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:11:32.242 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:11:32.242 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:11:32.242 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:11:32.242 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:11:32.242 4096+0 records in 00:11:32.242 4096+0 records out 00:11:32.242 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0307556 s, 68.2 MB/s 00:11:32.242 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd 
if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:11:32.545 4096+0 records in 00:11:32.545 4096+0 records out 00:11:32.545 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.35288 s, 5.9 MB/s 00:11:32.545 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:11:32.545 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:32.829 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:11:32.829 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:11:32.829 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:11:32.829 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:11:32.829 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:11:32.829 128+0 records in 00:11:32.829 128+0 records out 00:11:32.829 65536 bytes (66 kB, 64 KiB) copied, 0.000724252 s, 90.5 MB/s 00:11:32.829 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:11:32.829 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:11:32.829 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:32.829 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:11:32.829 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:11:32.829 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:11:32.829 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:11:32.829 12:10:28 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:11:32.829 2035+0 records in 00:11:32.829 2035+0 records out 00:11:32.829 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00841876 s, 124 MB/s 00:11:32.829 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:11:32.829 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:11:32.829 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:32.829 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:11:32.829 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:11:32.829 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:11:32.829 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:11:32.829 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:11:32.829 456+0 records in 00:11:32.829 456+0 records out 00:11:32.829 233472 bytes (233 kB, 228 KiB) copied, 0.00195252 s, 120 MB/s 00:11:32.829 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:11:32.829 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:11:32.830 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:32.830 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:11:32.830 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:11:32.830 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # 
return 0 00:11:32.830 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:32.830 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:32.830 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:32.830 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:32.830 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:11:32.830 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:32.830 12:10:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:33.088 [2024-11-25 12:10:29.029379] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:33.088 12:10:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:33.088 12:10:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:33.088 12:10:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:33.088 12:10:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:33.088 12:10:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:33.088 12:10:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:33.088 12:10:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:11:33.088 12:10:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:11:33.088 12:10:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:11:33.088 12:10:29 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:11:33.088 12:10:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:11:33.347 12:10:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:33.347 12:10:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:33.347 12:10:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:33.606 12:10:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:33.606 12:10:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:11:33.606 12:10:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:33.606 12:10:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:11:33.606 12:10:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:11:33.606 12:10:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:11:33.606 12:10:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:11:33.606 12:10:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:11:33.606 12:10:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60283 00:11:33.606 12:10:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60283 ']' 00:11:33.606 12:10:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60283 00:11:33.606 12:10:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:11:33.606 12:10:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:33.606 12:10:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60283 
00:11:33.606 12:10:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:33.606 12:10:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:33.606 killing process with pid 60283 00:11:33.606 12:10:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60283' 00:11:33.606 12:10:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60283 00:11:33.606 [2024-11-25 12:10:29.499221] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:33.606 12:10:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60283 00:11:33.606 [2024-11-25 12:10:29.499338] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:33.606 [2024-11-25 12:10:29.499421] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:33.606 [2024-11-25 12:10:29.499472] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:11:33.606 [2024-11-25 12:10:29.690644] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:35.028 12:10:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:11:35.028 00:11:35.028 real 0m4.449s 00:11:35.028 user 0m5.578s 00:11:35.028 sys 0m1.006s 00:11:35.028 12:10:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.028 12:10:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:11:35.028 ************************************ 00:11:35.028 END TEST raid_function_test_raid0 00:11:35.028 ************************************ 00:11:35.028 12:10:30 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:11:35.028 12:10:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 
-le 1 ']' 00:11:35.028 12:10:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.028 12:10:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:35.028 ************************************ 00:11:35.028 START TEST raid_function_test_concat 00:11:35.028 ************************************ 00:11:35.028 12:10:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:11:35.028 12:10:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:11:35.028 12:10:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:11:35.028 12:10:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:11:35.028 12:10:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60423 00:11:35.028 Process raid pid: 60423 00:11:35.028 12:10:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60423' 00:11:35.028 12:10:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:35.028 12:10:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60423 00:11:35.028 12:10:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60423 ']' 00:11:35.028 12:10:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.028 12:10:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:35.028 12:10:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:35.028 12:10:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:35.028 12:10:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:11:35.028 [2024-11-25 12:10:30.877246] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:11:35.028 [2024-11-25 12:10:30.877443] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:35.028 [2024-11-25 12:10:31.062250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.286 [2024-11-25 12:10:31.193095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.545 [2024-11-25 12:10:31.401510] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:35.545 [2024-11-25 12:10:31.401565] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:35.803 12:10:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:35.803 12:10:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:11:35.803 12:10:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:11:35.803 12:10:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.803 12:10:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:11:35.803 Base_1 00:11:35.803 12:10:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.803 12:10:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:11:35.803 12:10:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:35.803 12:10:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:11:35.803 Base_2 00:11:35.803 12:10:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.803 12:10:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:11:35.803 12:10:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.803 12:10:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:11:35.803 [2024-11-25 12:10:31.891189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:11:36.061 [2024-11-25 12:10:31.893685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:11:36.061 [2024-11-25 12:10:31.893792] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:36.061 [2024-11-25 12:10:31.893813] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:36.061 [2024-11-25 12:10:31.894158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:36.061 [2024-11-25 12:10:31.894364] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:36.061 [2024-11-25 12:10:31.894382] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:11:36.061 [2024-11-25 12:10:31.894572] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:36.061 12:10:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.061 12:10:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:11:36.061 12:10:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:36.061 12:10:31 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.061 12:10:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:11:36.061 12:10:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.061 12:10:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:11:36.061 12:10:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:11:36.061 12:10:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:11:36.061 12:10:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:36.062 12:10:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:11:36.062 12:10:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:36.062 12:10:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:36.062 12:10:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:36.062 12:10:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:11:36.062 12:10:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:36.062 12:10:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:36.062 12:10:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:11:36.320 [2024-11-25 12:10:32.211324] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:36.320 /dev/nbd0 00:11:36.320 12:10:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:36.320 12:10:32 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:36.320 12:10:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:36.320 12:10:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:11:36.320 12:10:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:36.320 12:10:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:36.320 12:10:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:36.320 12:10:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:11:36.320 12:10:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:36.320 12:10:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:36.320 12:10:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:36.320 1+0 records in 00:11:36.320 1+0 records out 00:11:36.320 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028204 s, 14.5 MB/s 00:11:36.320 12:10:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:36.320 12:10:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:11:36.320 12:10:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:36.320 12:10:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:36.320 12:10:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:11:36.320 12:10:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:36.320 
12:10:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:36.320 12:10:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:11:36.320 12:10:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:11:36.320 12:10:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:11:36.578 12:10:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:36.578 { 00:11:36.578 "nbd_device": "/dev/nbd0", 00:11:36.578 "bdev_name": "raid" 00:11:36.578 } 00:11:36.578 ]' 00:11:36.578 12:10:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:36.578 { 00:11:36.578 "nbd_device": "/dev/nbd0", 00:11:36.578 "bdev_name": "raid" 00:11:36.578 } 00:11:36.578 ]' 00:11:36.578 12:10:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:36.578 12:10:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:11:36.578 12:10:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:36.578 12:10:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:11:36.578 12:10:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:11:36.578 12:10:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:11:36.578 12:10:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:11:36.578 12:10:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:11:36.578 12:10:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:11:36.578 12:10:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:11:36.578 
12:10:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:11:36.579 12:10:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:11:36.579 12:10:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:11:36.579 12:10:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:11:36.579 12:10:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:11:36.579 12:10:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:11:36.579 12:10:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:11:36.579 12:10:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:11:36.579 12:10:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:11:36.579 12:10:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:11:36.579 12:10:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:11:36.579 12:10:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:11:36.579 12:10:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:11:36.579 12:10:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:11:36.579 12:10:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:11:36.579 4096+0 records in 00:11:36.579 4096+0 records out 00:11:36.579 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0243359 s, 86.2 MB/s 00:11:36.579 12:10:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:11:37.146 4096+0 records in 00:11:37.146 4096+0 
records out 00:11:37.146 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.366254 s, 5.7 MB/s 00:11:37.146 12:10:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:11:37.146 12:10:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:37.146 12:10:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:11:37.146 12:10:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:11:37.146 12:10:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:11:37.146 12:10:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:11:37.146 12:10:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:11:37.146 128+0 records in 00:11:37.146 128+0 records out 00:11:37.146 65536 bytes (66 kB, 64 KiB) copied, 0.000922612 s, 71.0 MB/s 00:11:37.146 12:10:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:11:37.146 12:10:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:11:37.146 12:10:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:37.146 12:10:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:11:37.146 12:10:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:11:37.146 12:10:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:11:37.146 12:10:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:11:37.146 12:10:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 
00:11:37.146 2035+0 records in 00:11:37.146 2035+0 records out 00:11:37.146 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.011392 s, 91.5 MB/s 00:11:37.146 12:10:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:11:37.146 12:10:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:11:37.146 12:10:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:37.146 12:10:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:11:37.146 12:10:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:11:37.146 12:10:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:11:37.146 12:10:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:11:37.146 12:10:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:11:37.146 456+0 records in 00:11:37.146 456+0 records out 00:11:37.146 233472 bytes (233 kB, 228 KiB) copied, 0.00304509 s, 76.7 MB/s 00:11:37.146 12:10:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:11:37.146 12:10:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:11:37.146 12:10:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:37.146 12:10:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:11:37.146 12:10:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:11:37.146 12:10:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:11:37.146 12:10:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:37.146 12:10:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:37.146 12:10:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:37.146 12:10:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:37.146 12:10:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:11:37.146 12:10:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:37.146 12:10:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:37.405 12:10:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:37.405 [2024-11-25 12:10:33.411961] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:37.405 12:10:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:37.405 12:10:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:37.405 12:10:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:37.405 12:10:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:37.405 12:10:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:37.405 12:10:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:11:37.405 12:10:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:11:37.405 12:10:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:11:37.405 12:10:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:11:37.405 12:10:33 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:11:37.664 12:10:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:37.664 12:10:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:37.664 12:10:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:37.664 12:10:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:37.664 12:10:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:37.664 12:10:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:37.664 12:10:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:11:37.664 12:10:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:11:37.664 12:10:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:37.664 12:10:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:11:37.664 12:10:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:11:37.664 12:10:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60423 00:11:37.664 12:10:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60423 ']' 00:11:37.664 12:10:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60423 00:11:37.664 12:10:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:11:37.664 12:10:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:37.664 12:10:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60423 00:11:37.922 12:10:33 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:37.922 killing process with pid 60423 00:11:37.922 12:10:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:37.922 12:10:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60423' 00:11:37.922 12:10:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60423 00:11:37.922 12:10:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60423 00:11:37.922 [2024-11-25 12:10:33.765887] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:37.922 [2024-11-25 12:10:33.766019] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:37.922 [2024-11-25 12:10:33.766089] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:37.922 [2024-11-25 12:10:33.766109] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:11:37.922 [2024-11-25 12:10:33.951122] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:39.300 12:10:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:11:39.300 00:11:39.300 real 0m4.201s 00:11:39.300 user 0m5.136s 00:11:39.300 sys 0m0.966s 00:11:39.300 12:10:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:39.300 12:10:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:11:39.300 ************************************ 00:11:39.300 END TEST raid_function_test_concat 00:11:39.300 ************************************ 00:11:39.300 12:10:35 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:11:39.300 12:10:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:39.300 12:10:35 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:11:39.300 12:10:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:39.300 ************************************ 00:11:39.300 START TEST raid0_resize_test 00:11:39.300 ************************************ 00:11:39.300 12:10:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:11:39.300 12:10:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:11:39.300 12:10:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:11:39.300 12:10:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:11:39.300 12:10:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:11:39.300 12:10:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:11:39.300 12:10:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:11:39.300 12:10:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:11:39.300 12:10:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:11:39.300 12:10:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60551 00:11:39.300 12:10:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:39.300 12:10:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60551' 00:11:39.300 Process raid pid: 60551 00:11:39.300 12:10:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60551 00:11:39.300 12:10:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60551 ']' 00:11:39.300 12:10:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.300 12:10:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:11:39.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.300 12:10:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.300 12:10:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:39.300 12:10:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.300 [2024-11-25 12:10:35.127835] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:11:39.300 [2024-11-25 12:10:35.128029] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:39.300 [2024-11-25 12:10:35.312753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.578 [2024-11-25 12:10:35.445153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.578 [2024-11-25 12:10:35.654540] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:39.578 [2024-11-25 12:10:35.654623] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:40.145 12:10:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:40.145 12:10:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:11:40.145 12:10:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:11:40.145 12:10:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.145 12:10:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.145 Base_1 00:11:40.145 12:10:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.145 
12:10:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:11:40.145 12:10:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.145 12:10:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.145 Base_2 00:11:40.145 12:10:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.145 12:10:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:11:40.145 12:10:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:11:40.145 12:10:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.145 12:10:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.145 [2024-11-25 12:10:36.129762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:11:40.145 [2024-11-25 12:10:36.132129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:11:40.145 [2024-11-25 12:10:36.132208] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:40.145 [2024-11-25 12:10:36.132229] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:40.145 [2024-11-25 12:10:36.132585] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:11:40.145 [2024-11-25 12:10:36.132768] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:40.145 [2024-11-25 12:10:36.132785] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:11:40.145 [2024-11-25 12:10:36.132957] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:40.145 12:10:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.145 
12:10:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:11:40.145 12:10:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.145 12:10:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.145 [2024-11-25 12:10:36.137772] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:40.145 [2024-11-25 12:10:36.137810] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:11:40.145 true 00:11:40.145 12:10:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.145 12:10:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:11:40.145 12:10:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:11:40.145 12:10:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.145 12:10:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.145 [2024-11-25 12:10:36.149950] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:40.145 12:10:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.145 12:10:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:11:40.145 12:10:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:11:40.145 12:10:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:11:40.145 12:10:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:11:40.145 12:10:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:11:40.145 12:10:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:11:40.145 12:10:36 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.145 12:10:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.145 [2024-11-25 12:10:36.197755] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:40.145 [2024-11-25 12:10:36.197789] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:11:40.145 [2024-11-25 12:10:36.197827] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:11:40.145 true 00:11:40.145 12:10:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.145 12:10:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:11:40.145 12:10:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:11:40.145 12:10:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.145 12:10:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.145 [2024-11-25 12:10:36.209973] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:40.145 12:10:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.405 12:10:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:11:40.405 12:10:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:11:40.405 12:10:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:11:40.405 12:10:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:11:40.405 12:10:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:11:40.405 12:10:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60551 00:11:40.405 12:10:36 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@954 -- # '[' -z 60551 ']' 00:11:40.405 12:10:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60551 00:11:40.405 12:10:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:11:40.405 12:10:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:40.405 12:10:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60551 00:11:40.405 12:10:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:40.405 12:10:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:40.405 killing process with pid 60551 00:11:40.405 12:10:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60551' 00:11:40.405 12:10:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60551 00:11:40.405 [2024-11-25 12:10:36.281713] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:40.405 12:10:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60551 00:11:40.405 [2024-11-25 12:10:36.281829] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:40.405 [2024-11-25 12:10:36.281898] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:40.405 [2024-11-25 12:10:36.281913] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:11:40.405 [2024-11-25 12:10:36.297574] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:41.343 12:10:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:11:41.343 00:11:41.343 real 0m2.311s 00:11:41.343 user 0m2.554s 00:11:41.343 sys 0m0.389s 00:11:41.343 12:10:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:41.343 
************************************ 00:11:41.343 END TEST raid0_resize_test 00:11:41.343 ************************************ 00:11:41.343 12:10:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.343 12:10:37 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:11:41.343 12:10:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:41.343 12:10:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:41.343 12:10:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:41.343 ************************************ 00:11:41.343 START TEST raid1_resize_test 00:11:41.343 ************************************ 00:11:41.343 12:10:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:11:41.343 12:10:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:11:41.343 12:10:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:11:41.343 12:10:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:11:41.343 12:10:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:11:41.343 12:10:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:11:41.343 12:10:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:11:41.343 12:10:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:11:41.343 12:10:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:11:41.343 12:10:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60607 00:11:41.343 Process raid pid: 60607 00:11:41.343 12:10:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60607' 00:11:41.343 12:10:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60607 00:11:41.343 12:10:37 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:41.343 12:10:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60607 ']' 00:11:41.343 12:10:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.343 12:10:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:41.343 12:10:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.343 12:10:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:41.343 12:10:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.602 [2024-11-25 12:10:37.495829] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:11:41.602 [2024-11-25 12:10:37.496029] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:41.602 [2024-11-25 12:10:37.688229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.861 [2024-11-25 12:10:37.844225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.119 [2024-11-25 12:10:38.053121] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:42.119 [2024-11-25 12:10:38.053196] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:42.688 12:10:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:42.688 12:10:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:11:42.688 12:10:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:11:42.688 12:10:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.688 12:10:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.688 Base_1 00:11:42.688 12:10:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.688 12:10:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:11:42.688 12:10:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.688 12:10:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.688 Base_2 00:11:42.688 12:10:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.688 12:10:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:11:42.688 12:10:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:11:42.688 12:10:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.688 12:10:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.688 [2024-11-25 12:10:38.580904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:11:42.688 [2024-11-25 12:10:38.583305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:11:42.688 [2024-11-25 12:10:38.583424] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:42.688 [2024-11-25 12:10:38.583445] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:42.688 [2024-11-25 12:10:38.583756] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:11:42.688 [2024-11-25 12:10:38.583925] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:42.688 [2024-11-25 12:10:38.583941] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:11:42.688 [2024-11-25 12:10:38.584115] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.688 12:10:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.688 12:10:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:11:42.688 12:10:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.688 12:10:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.688 [2024-11-25 12:10:38.588880] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:42.688 [2024-11-25 12:10:38.589045] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:11:42.688 true 00:11:42.688 
12:10:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.688 12:10:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:11:42.688 12:10:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:11:42.688 12:10:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.688 12:10:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.688 [2024-11-25 12:10:38.601217] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:42.688 12:10:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.688 12:10:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:11:42.688 12:10:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:11:42.688 12:10:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:11:42.688 12:10:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:11:42.688 12:10:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:11:42.688 12:10:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:11:42.688 12:10:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.688 12:10:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.688 [2024-11-25 12:10:38.657047] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:42.688 [2024-11-25 12:10:38.657081] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:11:42.688 [2024-11-25 12:10:38.657124] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:11:42.688 true 00:11:42.688 12:10:38 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.689 12:10:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:11:42.689 12:10:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.689 12:10:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.689 12:10:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:11:42.689 [2024-11-25 12:10:38.669253] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:42.689 12:10:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.689 12:10:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:11:42.689 12:10:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:11:42.689 12:10:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:11:42.689 12:10:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:11:42.689 12:10:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:11:42.689 12:10:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60607 00:11:42.689 12:10:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60607 ']' 00:11:42.689 12:10:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60607 00:11:42.689 12:10:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:11:42.689 12:10:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:42.689 12:10:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60607 00:11:42.689 killing process with pid 60607 00:11:42.689 12:10:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:42.689 12:10:38 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:42.689 12:10:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60607' 00:11:42.689 12:10:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60607 00:11:42.689 [2024-11-25 12:10:38.761114] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:42.689 12:10:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60607 00:11:42.689 [2024-11-25 12:10:38.761229] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:42.689 [2024-11-25 12:10:38.761907] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:42.689 [2024-11-25 12:10:38.761939] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:11:42.948 [2024-11-25 12:10:38.777214] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:43.881 ************************************ 00:11:43.881 END TEST raid1_resize_test 00:11:43.881 ************************************ 00:11:43.881 12:10:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:11:43.881 00:11:43.881 real 0m2.433s 00:11:43.881 user 0m2.797s 00:11:43.881 sys 0m0.368s 00:11:43.881 12:10:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:43.881 12:10:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.881 12:10:39 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:11:43.881 12:10:39 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:43.881 12:10:39 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:11:43.881 12:10:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:43.881 12:10:39 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.881 12:10:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:43.881 ************************************ 00:11:43.881 START TEST raid_state_function_test 00:11:43.881 ************************************ 00:11:43.881 12:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:11:43.881 12:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:43.881 12:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:11:43.881 12:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:43.882 12:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:43.882 12:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:43.882 12:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:43.882 12:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:43.882 12:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:43.882 12:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:43.882 12:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:43.882 12:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:43.882 12:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:43.882 12:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:43.882 12:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:43.882 12:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:11:43.882 12:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:43.882 12:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:43.882 12:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:43.882 12:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:43.882 12:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:43.882 12:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:43.882 12:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:43.882 12:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:43.882 Process raid pid: 60670 00:11:43.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:43.882 12:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60670 00:11:43.882 12:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:43.882 12:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60670' 00:11:43.882 12:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60670 00:11:43.882 12:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60670 ']' 00:11:43.882 12:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.882 12:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:43.882 12:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.882 12:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:43.882 12:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.882 [2024-11-25 12:10:39.964696] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:11:43.882 [2024-11-25 12:10:39.964863] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.139 [2024-11-25 12:10:40.145389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.398 [2024-11-25 12:10:40.291822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.656 [2024-11-25 12:10:40.500442] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:44.656 [2024-11-25 12:10:40.500503] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:45.222 12:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:45.222 12:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:45.222 12:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:45.222 12:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.222 12:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.222 [2024-11-25 12:10:41.028980] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:45.222 [2024-11-25 12:10:41.029044] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:45.222 [2024-11-25 12:10:41.029062] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:45.222 [2024-11-25 12:10:41.029078] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:45.222 12:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.222 12:10:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:45.222 12:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.222 12:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.222 12:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:45.222 12:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:45.222 12:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:45.222 12:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.222 12:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.222 12:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.222 12:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.222 12:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.222 12:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.222 12:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.222 12:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.222 12:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.222 12:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.222 "name": "Existed_Raid", 00:11:45.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.222 "strip_size_kb": 64, 00:11:45.222 "state": "configuring", 00:11:45.222 
"raid_level": "raid0", 00:11:45.222 "superblock": false, 00:11:45.222 "num_base_bdevs": 2, 00:11:45.222 "num_base_bdevs_discovered": 0, 00:11:45.222 "num_base_bdevs_operational": 2, 00:11:45.222 "base_bdevs_list": [ 00:11:45.222 { 00:11:45.222 "name": "BaseBdev1", 00:11:45.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.222 "is_configured": false, 00:11:45.222 "data_offset": 0, 00:11:45.222 "data_size": 0 00:11:45.222 }, 00:11:45.222 { 00:11:45.222 "name": "BaseBdev2", 00:11:45.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.222 "is_configured": false, 00:11:45.222 "data_offset": 0, 00:11:45.222 "data_size": 0 00:11:45.222 } 00:11:45.222 ] 00:11:45.222 }' 00:11:45.222 12:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.222 12:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.481 12:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:45.481 12:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.481 12:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.481 [2024-11-25 12:10:41.533069] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:45.481 [2024-11-25 12:10:41.533266] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:45.481 12:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.481 12:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:45.481 12:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.481 12:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
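The trace above shows the test pulling one raid bdev's info out of the `bdev_raid_get_bdevs all` RPC response with `jq -r '.[] | select(.name == "Existed_Raid")'`. A minimal standalone sketch of that same selection in Python (the sample list is abridged from the JSON dumped in the trace; this is an illustration, not part of the test suite):

```python
import json

# Abridged bdev_raid_get_bdevs output, copied from the trace above.
rpc_output = json.loads("""
[
  {
    "name": "Existed_Raid",
    "strip_size_kb": 64,
    "state": "configuring",
    "raid_level": "raid0",
    "num_base_bdevs": 2,
    "num_base_bdevs_discovered": 0,
    "num_base_bdevs_operational": 2
  }
]
""")

# Equivalent of: jq -r '.[] | select(.name == "Existed_Raid")'
info = next(b for b in rpc_output if b["name"] == "Existed_Raid")

# The raid bdev stays "configuring" while its base bdevs don't exist yet,
# which is exactly what the NOTICE lines above report for BaseBdev1/BaseBdev2.
print(info["state"])
```

The test then compares individual fields of this object (state, raid_level, strip_size_kb, base bdev counts) against expected values, which is what the `verify_raid_bdev_state` calls in the trace do.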
00:11:45.481 [2024-11-25 12:10:41.541045] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:45.481 [2024-11-25 12:10:41.541098] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:45.481 [2024-11-25 12:10:41.541114] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:45.481 [2024-11-25 12:10:41.541134] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:45.481 12:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.481 12:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:45.481 12:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.481 12:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.740 [2024-11-25 12:10:41.589949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:45.740 BaseBdev1 00:11:45.740 12:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.741 12:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:45.741 12:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:45.741 12:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:45.741 12:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:45.741 12:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:45.741 12:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:45.741 12:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:11:45.741 12:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.741 12:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.741 12:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.741 12:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:45.741 12:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.741 12:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.741 [ 00:11:45.741 { 00:11:45.741 "name": "BaseBdev1", 00:11:45.741 "aliases": [ 00:11:45.741 "fc63e70a-ab0f-42da-9f8f-242ed61b3d44" 00:11:45.741 ], 00:11:45.741 "product_name": "Malloc disk", 00:11:45.741 "block_size": 512, 00:11:45.741 "num_blocks": 65536, 00:11:45.741 "uuid": "fc63e70a-ab0f-42da-9f8f-242ed61b3d44", 00:11:45.741 "assigned_rate_limits": { 00:11:45.741 "rw_ios_per_sec": 0, 00:11:45.741 "rw_mbytes_per_sec": 0, 00:11:45.741 "r_mbytes_per_sec": 0, 00:11:45.741 "w_mbytes_per_sec": 0 00:11:45.741 }, 00:11:45.741 "claimed": true, 00:11:45.741 "claim_type": "exclusive_write", 00:11:45.741 "zoned": false, 00:11:45.741 "supported_io_types": { 00:11:45.741 "read": true, 00:11:45.741 "write": true, 00:11:45.741 "unmap": true, 00:11:45.741 "flush": true, 00:11:45.741 "reset": true, 00:11:45.741 "nvme_admin": false, 00:11:45.741 "nvme_io": false, 00:11:45.741 "nvme_io_md": false, 00:11:45.741 "write_zeroes": true, 00:11:45.741 "zcopy": true, 00:11:45.741 "get_zone_info": false, 00:11:45.741 "zone_management": false, 00:11:45.741 "zone_append": false, 00:11:45.741 "compare": false, 00:11:45.741 "compare_and_write": false, 00:11:45.741 "abort": true, 00:11:45.741 "seek_hole": false, 00:11:45.741 "seek_data": false, 00:11:45.741 "copy": true, 00:11:45.741 "nvme_iov_md": 
false 00:11:45.741 }, 00:11:45.741 "memory_domains": [ 00:11:45.741 { 00:11:45.741 "dma_device_id": "system", 00:11:45.741 "dma_device_type": 1 00:11:45.741 }, 00:11:45.741 { 00:11:45.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.741 "dma_device_type": 2 00:11:45.741 } 00:11:45.741 ], 00:11:45.741 "driver_specific": {} 00:11:45.741 } 00:11:45.741 ] 00:11:45.741 12:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.741 12:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:45.741 12:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:45.741 12:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.741 12:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.741 12:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:45.741 12:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:45.741 12:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:45.741 12:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.741 12:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.741 12:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.741 12:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.741 12:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.741 12:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.741 12:10:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.741 12:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.741 12:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.741 12:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.741 "name": "Existed_Raid", 00:11:45.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.741 "strip_size_kb": 64, 00:11:45.741 "state": "configuring", 00:11:45.741 "raid_level": "raid0", 00:11:45.741 "superblock": false, 00:11:45.741 "num_base_bdevs": 2, 00:11:45.741 "num_base_bdevs_discovered": 1, 00:11:45.741 "num_base_bdevs_operational": 2, 00:11:45.741 "base_bdevs_list": [ 00:11:45.741 { 00:11:45.741 "name": "BaseBdev1", 00:11:45.741 "uuid": "fc63e70a-ab0f-42da-9f8f-242ed61b3d44", 00:11:45.741 "is_configured": true, 00:11:45.741 "data_offset": 0, 00:11:45.741 "data_size": 65536 00:11:45.741 }, 00:11:45.741 { 00:11:45.741 "name": "BaseBdev2", 00:11:45.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.741 "is_configured": false, 00:11:45.741 "data_offset": 0, 00:11:45.741 "data_size": 0 00:11:45.741 } 00:11:45.741 ] 00:11:45.741 }' 00:11:45.741 12:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.741 12:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.308 12:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:46.308 12:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.308 12:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.308 [2024-11-25 12:10:42.130141] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:46.308 [2024-11-25 12:10:42.130207] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:46.308 12:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.308 12:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:46.308 12:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.308 12:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.308 [2024-11-25 12:10:42.138187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:46.308 [2024-11-25 12:10:42.140717] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:46.308 [2024-11-25 12:10:42.140774] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:46.308 12:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.308 12:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:46.308 12:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:46.308 12:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:46.308 12:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:46.308 12:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.308 12:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:46.308 12:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:46.308 12:10:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:46.308 12:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.308 12:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.308 12:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.308 12:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.308 12:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.308 12:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.308 12:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.308 12:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.308 12:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.308 12:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.308 "name": "Existed_Raid", 00:11:46.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.308 "strip_size_kb": 64, 00:11:46.308 "state": "configuring", 00:11:46.308 "raid_level": "raid0", 00:11:46.308 "superblock": false, 00:11:46.308 "num_base_bdevs": 2, 00:11:46.309 "num_base_bdevs_discovered": 1, 00:11:46.309 "num_base_bdevs_operational": 2, 00:11:46.309 "base_bdevs_list": [ 00:11:46.309 { 00:11:46.309 "name": "BaseBdev1", 00:11:46.309 "uuid": "fc63e70a-ab0f-42da-9f8f-242ed61b3d44", 00:11:46.309 "is_configured": true, 00:11:46.309 "data_offset": 0, 00:11:46.309 "data_size": 65536 00:11:46.309 }, 00:11:46.309 { 00:11:46.309 "name": "BaseBdev2", 00:11:46.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.309 "is_configured": false, 00:11:46.309 "data_offset": 0, 00:11:46.309 "data_size": 0 
00:11:46.309 } 00:11:46.309 ] 00:11:46.309 }' 00:11:46.309 12:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.309 12:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.568 12:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:46.568 12:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.568 12:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.827 [2024-11-25 12:10:42.668784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:46.827 [2024-11-25 12:10:42.668859] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:46.827 [2024-11-25 12:10:42.668875] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:46.827 [2024-11-25 12:10:42.669214] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:46.827 [2024-11-25 12:10:42.669470] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:46.827 [2024-11-25 12:10:42.669507] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:46.827 [2024-11-25 12:10:42.669808] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:46.827 BaseBdev2 00:11:46.827 12:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.827 12:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:46.827 12:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:46.827 12:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:46.827 12:10:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:46.827 12:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:46.827 12:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:46.827 12:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:46.827 12:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.827 12:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.827 12:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.827 12:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:46.827 12:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.827 12:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.827 [ 00:11:46.827 { 00:11:46.827 "name": "BaseBdev2", 00:11:46.827 "aliases": [ 00:11:46.827 "33d18ee9-14d5-4928-8d65-77ff26c10e85" 00:11:46.827 ], 00:11:46.827 "product_name": "Malloc disk", 00:11:46.827 "block_size": 512, 00:11:46.827 "num_blocks": 65536, 00:11:46.827 "uuid": "33d18ee9-14d5-4928-8d65-77ff26c10e85", 00:11:46.827 "assigned_rate_limits": { 00:11:46.827 "rw_ios_per_sec": 0, 00:11:46.827 "rw_mbytes_per_sec": 0, 00:11:46.827 "r_mbytes_per_sec": 0, 00:11:46.827 "w_mbytes_per_sec": 0 00:11:46.827 }, 00:11:46.827 "claimed": true, 00:11:46.827 "claim_type": "exclusive_write", 00:11:46.827 "zoned": false, 00:11:46.827 "supported_io_types": { 00:11:46.827 "read": true, 00:11:46.827 "write": true, 00:11:46.827 "unmap": true, 00:11:46.827 "flush": true, 00:11:46.827 "reset": true, 00:11:46.827 "nvme_admin": false, 00:11:46.827 "nvme_io": false, 00:11:46.827 "nvme_io_md": 
false, 00:11:46.827 "write_zeroes": true, 00:11:46.827 "zcopy": true, 00:11:46.827 "get_zone_info": false, 00:11:46.827 "zone_management": false, 00:11:46.827 "zone_append": false, 00:11:46.827 "compare": false, 00:11:46.827 "compare_and_write": false, 00:11:46.827 "abort": true, 00:11:46.827 "seek_hole": false, 00:11:46.827 "seek_data": false, 00:11:46.827 "copy": true, 00:11:46.827 "nvme_iov_md": false 00:11:46.827 }, 00:11:46.827 "memory_domains": [ 00:11:46.827 { 00:11:46.827 "dma_device_id": "system", 00:11:46.827 "dma_device_type": 1 00:11:46.827 }, 00:11:46.827 { 00:11:46.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.827 "dma_device_type": 2 00:11:46.827 } 00:11:46.827 ], 00:11:46.827 "driver_specific": {} 00:11:46.827 } 00:11:46.827 ] 00:11:46.827 12:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.827 12:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:46.827 12:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:46.827 12:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:46.827 12:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:11:46.827 12:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:46.827 12:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:46.827 12:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:46.827 12:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:46.827 12:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:46.827 12:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:46.827 12:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.827 12:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.827 12:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.827 12:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.827 12:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.827 12:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.827 12:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.827 12:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.827 12:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.827 "name": "Existed_Raid", 00:11:46.827 "uuid": "84df2ee4-1377-4cca-89a1-5885f3f88966", 00:11:46.827 "strip_size_kb": 64, 00:11:46.827 "state": "online", 00:11:46.827 "raid_level": "raid0", 00:11:46.827 "superblock": false, 00:11:46.827 "num_base_bdevs": 2, 00:11:46.827 "num_base_bdevs_discovered": 2, 00:11:46.827 "num_base_bdevs_operational": 2, 00:11:46.827 "base_bdevs_list": [ 00:11:46.827 { 00:11:46.827 "name": "BaseBdev1", 00:11:46.827 "uuid": "fc63e70a-ab0f-42da-9f8f-242ed61b3d44", 00:11:46.827 "is_configured": true, 00:11:46.827 "data_offset": 0, 00:11:46.827 "data_size": 65536 00:11:46.827 }, 00:11:46.827 { 00:11:46.827 "name": "BaseBdev2", 00:11:46.827 "uuid": "33d18ee9-14d5-4928-8d65-77ff26c10e85", 00:11:46.827 "is_configured": true, 00:11:46.827 "data_offset": 0, 00:11:46.827 "data_size": 65536 00:11:46.827 } 00:11:46.827 ] 00:11:46.827 }' 00:11:46.827 12:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:11:46.827 12:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.395 12:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:47.395 12:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:47.395 12:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:47.395 12:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:47.395 12:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:47.395 12:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:47.395 12:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:47.395 12:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.395 12:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.395 12:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:47.395 [2024-11-25 12:10:43.189297] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:47.395 12:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.395 12:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:47.395 "name": "Existed_Raid", 00:11:47.395 "aliases": [ 00:11:47.395 "84df2ee4-1377-4cca-89a1-5885f3f88966" 00:11:47.395 ], 00:11:47.395 "product_name": "Raid Volume", 00:11:47.395 "block_size": 512, 00:11:47.395 "num_blocks": 131072, 00:11:47.395 "uuid": "84df2ee4-1377-4cca-89a1-5885f3f88966", 00:11:47.395 "assigned_rate_limits": { 00:11:47.395 "rw_ios_per_sec": 0, 00:11:47.395 "rw_mbytes_per_sec": 0, 00:11:47.395 "r_mbytes_per_sec": 
0, 00:11:47.395 "w_mbytes_per_sec": 0 00:11:47.395 }, 00:11:47.395 "claimed": false, 00:11:47.395 "zoned": false, 00:11:47.395 "supported_io_types": { 00:11:47.395 "read": true, 00:11:47.395 "write": true, 00:11:47.395 "unmap": true, 00:11:47.395 "flush": true, 00:11:47.395 "reset": true, 00:11:47.395 "nvme_admin": false, 00:11:47.395 "nvme_io": false, 00:11:47.395 "nvme_io_md": false, 00:11:47.395 "write_zeroes": true, 00:11:47.395 "zcopy": false, 00:11:47.395 "get_zone_info": false, 00:11:47.395 "zone_management": false, 00:11:47.395 "zone_append": false, 00:11:47.395 "compare": false, 00:11:47.395 "compare_and_write": false, 00:11:47.395 "abort": false, 00:11:47.395 "seek_hole": false, 00:11:47.395 "seek_data": false, 00:11:47.395 "copy": false, 00:11:47.395 "nvme_iov_md": false 00:11:47.395 }, 00:11:47.395 "memory_domains": [ 00:11:47.395 { 00:11:47.395 "dma_device_id": "system", 00:11:47.395 "dma_device_type": 1 00:11:47.395 }, 00:11:47.395 { 00:11:47.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.395 "dma_device_type": 2 00:11:47.395 }, 00:11:47.395 { 00:11:47.395 "dma_device_id": "system", 00:11:47.395 "dma_device_type": 1 00:11:47.395 }, 00:11:47.395 { 00:11:47.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.395 "dma_device_type": 2 00:11:47.395 } 00:11:47.395 ], 00:11:47.395 "driver_specific": { 00:11:47.395 "raid": { 00:11:47.395 "uuid": "84df2ee4-1377-4cca-89a1-5885f3f88966", 00:11:47.395 "strip_size_kb": 64, 00:11:47.395 "state": "online", 00:11:47.395 "raid_level": "raid0", 00:11:47.395 "superblock": false, 00:11:47.395 "num_base_bdevs": 2, 00:11:47.395 "num_base_bdevs_discovered": 2, 00:11:47.395 "num_base_bdevs_operational": 2, 00:11:47.395 "base_bdevs_list": [ 00:11:47.395 { 00:11:47.395 "name": "BaseBdev1", 00:11:47.395 "uuid": "fc63e70a-ab0f-42da-9f8f-242ed61b3d44", 00:11:47.395 "is_configured": true, 00:11:47.395 "data_offset": 0, 00:11:47.395 "data_size": 65536 00:11:47.395 }, 00:11:47.395 { 00:11:47.395 "name": "BaseBdev2", 
00:11:47.395 "uuid": "33d18ee9-14d5-4928-8d65-77ff26c10e85", 00:11:47.395 "is_configured": true, 00:11:47.395 "data_offset": 0, 00:11:47.395 "data_size": 65536 00:11:47.395 } 00:11:47.395 ] 00:11:47.395 } 00:11:47.395 } 00:11:47.395 }' 00:11:47.395 12:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:47.395 12:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:47.395 BaseBdev2' 00:11:47.395 12:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.395 12:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:47.395 12:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.395 12:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:47.395 12:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.395 12:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.395 12:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.395 12:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.395 12:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.395 12:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.395 12:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.395 12:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:11:47.395 12:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.395 12:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.395 12:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.395 12:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.395 12:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.395 12:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.395 12:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:47.395 12:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.395 12:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.395 [2024-11-25 12:10:43.481065] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:47.395 [2024-11-25 12:10:43.481113] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:47.395 [2024-11-25 12:10:43.481180] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:47.654 12:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.654 12:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:47.654 12:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:47.654 12:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:47.654 12:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:47.654 12:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:11:47.654 12:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:11:47.654 12:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.654 12:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:47.654 12:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:47.654 12:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.654 12:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:47.654 12:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.654 12:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.654 12:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.654 12:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.654 12:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.654 12:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.654 12:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.654 12:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.654 12:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.654 12:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.654 "name": "Existed_Raid", 00:11:47.654 "uuid": "84df2ee4-1377-4cca-89a1-5885f3f88966", 00:11:47.654 "strip_size_kb": 64, 00:11:47.654 
"state": "offline", 00:11:47.654 "raid_level": "raid0", 00:11:47.654 "superblock": false, 00:11:47.654 "num_base_bdevs": 2, 00:11:47.654 "num_base_bdevs_discovered": 1, 00:11:47.654 "num_base_bdevs_operational": 1, 00:11:47.654 "base_bdevs_list": [ 00:11:47.654 { 00:11:47.654 "name": null, 00:11:47.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.654 "is_configured": false, 00:11:47.654 "data_offset": 0, 00:11:47.654 "data_size": 65536 00:11:47.654 }, 00:11:47.654 { 00:11:47.654 "name": "BaseBdev2", 00:11:47.654 "uuid": "33d18ee9-14d5-4928-8d65-77ff26c10e85", 00:11:47.654 "is_configured": true, 00:11:47.654 "data_offset": 0, 00:11:47.654 "data_size": 65536 00:11:47.654 } 00:11:47.654 ] 00:11:47.654 }' 00:11:47.654 12:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.654 12:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.222 12:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:48.222 12:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:48.222 12:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.222 12:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.222 12:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.222 12:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:48.222 12:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.222 12:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:48.222 12:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:48.222 12:10:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:48.222 12:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.222 12:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.222 [2024-11-25 12:10:44.111945] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:48.222 [2024-11-25 12:10:44.112014] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:48.222 12:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.222 12:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:48.222 12:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:48.222 12:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.222 12:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:48.222 12:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.222 12:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.222 12:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.222 12:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:48.222 12:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:48.222 12:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:11:48.222 12:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60670 00:11:48.222 12:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60670 ']' 00:11:48.222 12:10:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 60670 00:11:48.222 12:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:48.222 12:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:48.222 12:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60670 00:11:48.222 12:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:48.222 12:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:48.222 killing process with pid 60670 00:11:48.222 12:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60670' 00:11:48.222 12:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60670 00:11:48.222 [2024-11-25 12:10:44.281966] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:48.222 12:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60670 00:11:48.222 [2024-11-25 12:10:44.296802] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:49.599 12:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:49.599 00:11:49.599 real 0m5.465s 00:11:49.599 user 0m8.269s 00:11:49.599 sys 0m0.751s 00:11:49.599 12:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:49.599 12:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.599 ************************************ 00:11:49.599 END TEST raid_state_function_test 00:11:49.599 ************************************ 00:11:49.599 12:10:45 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:11:49.599 12:10:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:11:49.599 12:10:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:49.599 12:10:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:49.599 ************************************ 00:11:49.599 START TEST raid_state_function_test_sb 00:11:49.599 ************************************ 00:11:49.599 12:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:11:49.599 12:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:49.599 12:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:11:49.599 12:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:49.599 12:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:49.599 12:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:49.599 12:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:49.599 12:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:49.599 12:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:49.599 12:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:49.599 12:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:49.599 12:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:49.599 12:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:49.599 12:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:49.599 12:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:11:49.599 12:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:49.599 12:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:49.599 12:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:49.599 12:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:49.599 12:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:49.599 12:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:49.599 12:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:49.599 12:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:49.599 12:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:49.599 12:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60924 00:11:49.599 12:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:49.599 Process raid pid: 60924 00:11:49.599 12:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60924' 00:11:49.599 12:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60924 00:11:49.599 12:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 60924 ']' 00:11:49.599 12:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.599 12:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:49.599 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:11:49.599 12:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.599 12:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:49.599 12:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.599 [2024-11-25 12:10:45.492055] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:11:49.599 [2024-11-25 12:10:45.492230] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:49.599 [2024-11-25 12:10:45.665057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.859 [2024-11-25 12:10:45.826487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.118 [2024-11-25 12:10:46.075282] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:50.118 [2024-11-25 12:10:46.075402] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:50.685 12:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:50.685 12:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:50.685 12:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:50.685 12:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.685 12:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.685 [2024-11-25 12:10:46.481747] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:11:50.685 [2024-11-25 12:10:46.481813] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:50.685 [2024-11-25 12:10:46.481830] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:50.685 [2024-11-25 12:10:46.481846] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:50.685 12:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.685 12:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:50.685 12:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.685 12:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.685 12:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:50.685 12:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:50.685 12:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:50.685 12:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.685 12:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.685 12:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.685 12:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.685 12:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.685 12:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.685 
12:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.685 12:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.685 12:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.685 12:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.685 "name": "Existed_Raid", 00:11:50.685 "uuid": "672ff7e0-6d2b-4d82-83ca-ffa333946835", 00:11:50.685 "strip_size_kb": 64, 00:11:50.685 "state": "configuring", 00:11:50.685 "raid_level": "raid0", 00:11:50.685 "superblock": true, 00:11:50.685 "num_base_bdevs": 2, 00:11:50.685 "num_base_bdevs_discovered": 0, 00:11:50.685 "num_base_bdevs_operational": 2, 00:11:50.685 "base_bdevs_list": [ 00:11:50.685 { 00:11:50.685 "name": "BaseBdev1", 00:11:50.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.685 "is_configured": false, 00:11:50.685 "data_offset": 0, 00:11:50.685 "data_size": 0 00:11:50.685 }, 00:11:50.685 { 00:11:50.685 "name": "BaseBdev2", 00:11:50.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.685 "is_configured": false, 00:11:50.685 "data_offset": 0, 00:11:50.685 "data_size": 0 00:11:50.685 } 00:11:50.685 ] 00:11:50.685 }' 00:11:50.685 12:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.685 12:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.944 12:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:50.944 12:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.944 12:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.944 [2024-11-25 12:10:46.957812] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:11:50.944 [2024-11-25 12:10:46.957856] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:50.944 12:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.944 12:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:50.944 12:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.944 12:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.944 [2024-11-25 12:10:46.965806] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:50.944 [2024-11-25 12:10:46.965855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:50.944 [2024-11-25 12:10:46.965869] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:50.944 [2024-11-25 12:10:46.965887] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:50.944 12:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.944 12:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:50.944 12:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.944 12:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.944 [2024-11-25 12:10:47.010727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:50.944 BaseBdev1 00:11:50.944 12:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.944 12:10:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:50.944 12:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:50.944 12:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:50.944 12:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:50.944 12:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:50.944 12:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:50.944 12:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:50.944 12:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.944 12:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.944 12:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.944 12:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:50.944 12:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.944 12:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.944 [ 00:11:50.944 { 00:11:50.944 "name": "BaseBdev1", 00:11:50.944 "aliases": [ 00:11:50.944 "34c16a0b-7a20-4d9e-9720-ac65e806a56a" 00:11:50.944 ], 00:11:50.944 "product_name": "Malloc disk", 00:11:50.944 "block_size": 512, 00:11:50.944 "num_blocks": 65536, 00:11:51.203 "uuid": "34c16a0b-7a20-4d9e-9720-ac65e806a56a", 00:11:51.203 "assigned_rate_limits": { 00:11:51.203 "rw_ios_per_sec": 0, 00:11:51.203 "rw_mbytes_per_sec": 0, 00:11:51.203 "r_mbytes_per_sec": 0, 00:11:51.203 "w_mbytes_per_sec": 0 00:11:51.203 }, 00:11:51.203 "claimed": true, 
00:11:51.203 "claim_type": "exclusive_write", 00:11:51.203 "zoned": false, 00:11:51.203 "supported_io_types": { 00:11:51.203 "read": true, 00:11:51.203 "write": true, 00:11:51.203 "unmap": true, 00:11:51.203 "flush": true, 00:11:51.203 "reset": true, 00:11:51.203 "nvme_admin": false, 00:11:51.203 "nvme_io": false, 00:11:51.203 "nvme_io_md": false, 00:11:51.203 "write_zeroes": true, 00:11:51.203 "zcopy": true, 00:11:51.203 "get_zone_info": false, 00:11:51.203 "zone_management": false, 00:11:51.203 "zone_append": false, 00:11:51.203 "compare": false, 00:11:51.203 "compare_and_write": false, 00:11:51.203 "abort": true, 00:11:51.203 "seek_hole": false, 00:11:51.203 "seek_data": false, 00:11:51.203 "copy": true, 00:11:51.203 "nvme_iov_md": false 00:11:51.203 }, 00:11:51.203 "memory_domains": [ 00:11:51.203 { 00:11:51.203 "dma_device_id": "system", 00:11:51.203 "dma_device_type": 1 00:11:51.203 }, 00:11:51.203 { 00:11:51.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.203 "dma_device_type": 2 00:11:51.203 } 00:11:51.203 ], 00:11:51.203 "driver_specific": {} 00:11:51.203 } 00:11:51.203 ] 00:11:51.203 12:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.203 12:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:51.203 12:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:51.203 12:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.203 12:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.203 12:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:51.203 12:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:51.203 12:10:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:51.203 12:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.203 12:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.203 12:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.203 12:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.203 12:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.203 12:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.203 12:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.203 12:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.203 12:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.203 12:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.203 "name": "Existed_Raid", 00:11:51.203 "uuid": "b9a6d67f-2374-43bd-a1e3-c9fec46dd38b", 00:11:51.203 "strip_size_kb": 64, 00:11:51.203 "state": "configuring", 00:11:51.203 "raid_level": "raid0", 00:11:51.203 "superblock": true, 00:11:51.203 "num_base_bdevs": 2, 00:11:51.203 "num_base_bdevs_discovered": 1, 00:11:51.203 "num_base_bdevs_operational": 2, 00:11:51.203 "base_bdevs_list": [ 00:11:51.203 { 00:11:51.203 "name": "BaseBdev1", 00:11:51.203 "uuid": "34c16a0b-7a20-4d9e-9720-ac65e806a56a", 00:11:51.203 "is_configured": true, 00:11:51.203 "data_offset": 2048, 00:11:51.203 "data_size": 63488 00:11:51.203 }, 00:11:51.203 { 00:11:51.203 "name": "BaseBdev2", 00:11:51.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.203 
"is_configured": false, 00:11:51.203 "data_offset": 0, 00:11:51.203 "data_size": 0 00:11:51.203 } 00:11:51.203 ] 00:11:51.203 }' 00:11:51.203 12:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.203 12:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.462 12:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:51.462 12:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.462 12:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.462 [2024-11-25 12:10:47.550911] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:51.462 [2024-11-25 12:10:47.550974] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:51.720 12:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.720 12:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:51.720 12:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.720 12:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.720 [2024-11-25 12:10:47.558961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:51.720 [2024-11-25 12:10:47.561380] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:51.720 [2024-11-25 12:10:47.561428] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:51.720 12:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.720 12:10:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:51.720 12:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:51.720 12:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:51.720 12:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.720 12:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.720 12:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:51.720 12:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:51.720 12:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:51.720 12:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.720 12:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.720 12:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.720 12:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.720 12:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.721 12:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.721 12:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.721 12:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.721 12:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.721 12:10:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.721 "name": "Existed_Raid", 00:11:51.721 "uuid": "741b8949-c050-4bed-8af1-92a6ba9205a1", 00:11:51.721 "strip_size_kb": 64, 00:11:51.721 "state": "configuring", 00:11:51.721 "raid_level": "raid0", 00:11:51.721 "superblock": true, 00:11:51.721 "num_base_bdevs": 2, 00:11:51.721 "num_base_bdevs_discovered": 1, 00:11:51.721 "num_base_bdevs_operational": 2, 00:11:51.721 "base_bdevs_list": [ 00:11:51.721 { 00:11:51.721 "name": "BaseBdev1", 00:11:51.721 "uuid": "34c16a0b-7a20-4d9e-9720-ac65e806a56a", 00:11:51.721 "is_configured": true, 00:11:51.721 "data_offset": 2048, 00:11:51.721 "data_size": 63488 00:11:51.721 }, 00:11:51.721 { 00:11:51.721 "name": "BaseBdev2", 00:11:51.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.721 "is_configured": false, 00:11:51.721 "data_offset": 0, 00:11:51.721 "data_size": 0 00:11:51.721 } 00:11:51.721 ] 00:11:51.721 }' 00:11:51.721 12:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.721 12:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.979 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:51.979 12:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.979 12:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.237 [2024-11-25 12:10:48.081799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:52.237 [2024-11-25 12:10:48.082093] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:52.238 [2024-11-25 12:10:48.082113] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:52.238 [2024-11-25 12:10:48.082470] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:11:52.238 BaseBdev2 00:11:52.238 [2024-11-25 12:10:48.082673] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:52.238 [2024-11-25 12:10:48.082693] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:52.238 [2024-11-25 12:10:48.082859] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:52.238 12:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.238 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:52.238 12:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:52.238 12:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:52.238 12:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:52.238 12:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:52.238 12:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:52.238 12:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:52.238 12:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.238 12:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.238 12:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.238 12:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:52.238 12:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.238 12:10:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.238 [ 00:11:52.238 { 00:11:52.238 "name": "BaseBdev2", 00:11:52.238 "aliases": [ 00:11:52.238 "51d61602-9418-4ef0-9a3d-c4b1aa56262d" 00:11:52.238 ], 00:11:52.238 "product_name": "Malloc disk", 00:11:52.238 "block_size": 512, 00:11:52.238 "num_blocks": 65536, 00:11:52.238 "uuid": "51d61602-9418-4ef0-9a3d-c4b1aa56262d", 00:11:52.238 "assigned_rate_limits": { 00:11:52.238 "rw_ios_per_sec": 0, 00:11:52.238 "rw_mbytes_per_sec": 0, 00:11:52.238 "r_mbytes_per_sec": 0, 00:11:52.238 "w_mbytes_per_sec": 0 00:11:52.238 }, 00:11:52.238 "claimed": true, 00:11:52.238 "claim_type": "exclusive_write", 00:11:52.238 "zoned": false, 00:11:52.238 "supported_io_types": { 00:11:52.238 "read": true, 00:11:52.238 "write": true, 00:11:52.238 "unmap": true, 00:11:52.238 "flush": true, 00:11:52.238 "reset": true, 00:11:52.238 "nvme_admin": false, 00:11:52.238 "nvme_io": false, 00:11:52.238 "nvme_io_md": false, 00:11:52.238 "write_zeroes": true, 00:11:52.238 "zcopy": true, 00:11:52.238 "get_zone_info": false, 00:11:52.238 "zone_management": false, 00:11:52.238 "zone_append": false, 00:11:52.238 "compare": false, 00:11:52.238 "compare_and_write": false, 00:11:52.238 "abort": true, 00:11:52.238 "seek_hole": false, 00:11:52.238 "seek_data": false, 00:11:52.238 "copy": true, 00:11:52.238 "nvme_iov_md": false 00:11:52.238 }, 00:11:52.238 "memory_domains": [ 00:11:52.238 { 00:11:52.238 "dma_device_id": "system", 00:11:52.238 "dma_device_type": 1 00:11:52.238 }, 00:11:52.238 { 00:11:52.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.238 "dma_device_type": 2 00:11:52.238 } 00:11:52.238 ], 00:11:52.238 "driver_specific": {} 00:11:52.238 } 00:11:52.238 ] 00:11:52.238 12:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.238 12:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:52.238 12:10:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:52.238 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:52.238 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:11:52.238 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.238 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.238 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:52.238 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.238 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:52.238 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.238 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.238 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.238 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.238 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.238 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.238 12:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.238 12:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.238 12:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.238 12:10:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.238 "name": "Existed_Raid", 00:11:52.238 "uuid": "741b8949-c050-4bed-8af1-92a6ba9205a1", 00:11:52.238 "strip_size_kb": 64, 00:11:52.238 "state": "online", 00:11:52.238 "raid_level": "raid0", 00:11:52.238 "superblock": true, 00:11:52.238 "num_base_bdevs": 2, 00:11:52.238 "num_base_bdevs_discovered": 2, 00:11:52.238 "num_base_bdevs_operational": 2, 00:11:52.238 "base_bdevs_list": [ 00:11:52.238 { 00:11:52.238 "name": "BaseBdev1", 00:11:52.238 "uuid": "34c16a0b-7a20-4d9e-9720-ac65e806a56a", 00:11:52.238 "is_configured": true, 00:11:52.238 "data_offset": 2048, 00:11:52.238 "data_size": 63488 00:11:52.238 }, 00:11:52.238 { 00:11:52.238 "name": "BaseBdev2", 00:11:52.238 "uuid": "51d61602-9418-4ef0-9a3d-c4b1aa56262d", 00:11:52.238 "is_configured": true, 00:11:52.238 "data_offset": 2048, 00:11:52.238 "data_size": 63488 00:11:52.238 } 00:11:52.238 ] 00:11:52.238 }' 00:11:52.238 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.238 12:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.806 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:52.806 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:52.806 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:52.806 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:52.806 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:52.806 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:52.806 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:11:52.806 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:52.806 12:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.806 12:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.806 [2024-11-25 12:10:48.622385] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:52.806 12:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.806 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:52.806 "name": "Existed_Raid", 00:11:52.806 "aliases": [ 00:11:52.806 "741b8949-c050-4bed-8af1-92a6ba9205a1" 00:11:52.806 ], 00:11:52.806 "product_name": "Raid Volume", 00:11:52.806 "block_size": 512, 00:11:52.806 "num_blocks": 126976, 00:11:52.806 "uuid": "741b8949-c050-4bed-8af1-92a6ba9205a1", 00:11:52.806 "assigned_rate_limits": { 00:11:52.806 "rw_ios_per_sec": 0, 00:11:52.806 "rw_mbytes_per_sec": 0, 00:11:52.806 "r_mbytes_per_sec": 0, 00:11:52.806 "w_mbytes_per_sec": 0 00:11:52.806 }, 00:11:52.806 "claimed": false, 00:11:52.807 "zoned": false, 00:11:52.807 "supported_io_types": { 00:11:52.807 "read": true, 00:11:52.807 "write": true, 00:11:52.807 "unmap": true, 00:11:52.807 "flush": true, 00:11:52.807 "reset": true, 00:11:52.807 "nvme_admin": false, 00:11:52.807 "nvme_io": false, 00:11:52.807 "nvme_io_md": false, 00:11:52.807 "write_zeroes": true, 00:11:52.807 "zcopy": false, 00:11:52.807 "get_zone_info": false, 00:11:52.807 "zone_management": false, 00:11:52.807 "zone_append": false, 00:11:52.807 "compare": false, 00:11:52.807 "compare_and_write": false, 00:11:52.807 "abort": false, 00:11:52.807 "seek_hole": false, 00:11:52.807 "seek_data": false, 00:11:52.807 "copy": false, 00:11:52.807 "nvme_iov_md": false 00:11:52.807 }, 00:11:52.807 "memory_domains": [ 00:11:52.807 { 00:11:52.807 
"dma_device_id": "system", 00:11:52.807 "dma_device_type": 1 00:11:52.807 }, 00:11:52.807 { 00:11:52.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.807 "dma_device_type": 2 00:11:52.807 }, 00:11:52.807 { 00:11:52.807 "dma_device_id": "system", 00:11:52.807 "dma_device_type": 1 00:11:52.807 }, 00:11:52.807 { 00:11:52.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.807 "dma_device_type": 2 00:11:52.807 } 00:11:52.807 ], 00:11:52.807 "driver_specific": { 00:11:52.807 "raid": { 00:11:52.807 "uuid": "741b8949-c050-4bed-8af1-92a6ba9205a1", 00:11:52.807 "strip_size_kb": 64, 00:11:52.807 "state": "online", 00:11:52.807 "raid_level": "raid0", 00:11:52.807 "superblock": true, 00:11:52.807 "num_base_bdevs": 2, 00:11:52.807 "num_base_bdevs_discovered": 2, 00:11:52.807 "num_base_bdevs_operational": 2, 00:11:52.807 "base_bdevs_list": [ 00:11:52.807 { 00:11:52.807 "name": "BaseBdev1", 00:11:52.807 "uuid": "34c16a0b-7a20-4d9e-9720-ac65e806a56a", 00:11:52.807 "is_configured": true, 00:11:52.807 "data_offset": 2048, 00:11:52.807 "data_size": 63488 00:11:52.807 }, 00:11:52.807 { 00:11:52.807 "name": "BaseBdev2", 00:11:52.807 "uuid": "51d61602-9418-4ef0-9a3d-c4b1aa56262d", 00:11:52.807 "is_configured": true, 00:11:52.807 "data_offset": 2048, 00:11:52.807 "data_size": 63488 00:11:52.807 } 00:11:52.807 ] 00:11:52.807 } 00:11:52.807 } 00:11:52.807 }' 00:11:52.807 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:52.807 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:52.807 BaseBdev2' 00:11:52.807 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:52.807 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:52.807 12:10:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:52.807 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:52.807 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:52.807 12:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.807 12:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.807 12:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.807 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:52.807 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:52.807 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:52.807 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:52.807 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:52.807 12:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.807 12:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.807 12:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.807 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:52.807 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:52.807 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:11:52.807 12:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.807 12:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.807 [2024-11-25 12:10:48.882177] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:52.807 [2024-11-25 12:10:48.882225] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:52.807 [2024-11-25 12:10:48.882291] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:53.066 12:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.066 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:53.066 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:53.066 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:53.066 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:53.066 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:53.066 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:11:53.066 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.066 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:53.066 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:53.066 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:53.066 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:11:53.066 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.066 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.066 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.067 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.067 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.067 12:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.067 12:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.067 12:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.067 12:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.067 12:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.067 "name": "Existed_Raid", 00:11:53.067 "uuid": "741b8949-c050-4bed-8af1-92a6ba9205a1", 00:11:53.067 "strip_size_kb": 64, 00:11:53.067 "state": "offline", 00:11:53.067 "raid_level": "raid0", 00:11:53.067 "superblock": true, 00:11:53.067 "num_base_bdevs": 2, 00:11:53.067 "num_base_bdevs_discovered": 1, 00:11:53.067 "num_base_bdevs_operational": 1, 00:11:53.067 "base_bdevs_list": [ 00:11:53.067 { 00:11:53.067 "name": null, 00:11:53.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.067 "is_configured": false, 00:11:53.067 "data_offset": 0, 00:11:53.067 "data_size": 63488 00:11:53.067 }, 00:11:53.067 { 00:11:53.067 "name": "BaseBdev2", 00:11:53.067 "uuid": "51d61602-9418-4ef0-9a3d-c4b1aa56262d", 00:11:53.067 "is_configured": true, 00:11:53.067 "data_offset": 2048, 00:11:53.067 "data_size": 63488 00:11:53.067 } 00:11:53.067 ] 
00:11:53.067 }' 00:11:53.067 12:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.067 12:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.634 12:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:53.634 12:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:53.634 12:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.634 12:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:53.634 12:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.634 12:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.634 12:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.634 12:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:53.634 12:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:53.634 12:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:53.634 12:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.634 12:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.634 [2024-11-25 12:10:49.577606] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:53.634 [2024-11-25 12:10:49.577676] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:53.634 12:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.634 12:10:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:53.634 12:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:53.634 12:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.634 12:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:53.634 12:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.634 12:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.634 12:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.634 12:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:53.634 12:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:53.634 12:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:11:53.634 12:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60924 00:11:53.634 12:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 60924 ']' 00:11:53.634 12:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 60924 00:11:53.634 12:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:53.924 12:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:53.924 12:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60924 00:11:53.924 killing process with pid 60924 00:11:53.924 12:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:53.924 12:10:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:53.924 12:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60924' 00:11:53.924 12:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 60924 00:11:53.924 [2024-11-25 12:10:49.750875] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:53.924 12:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 60924 00:11:53.924 [2024-11-25 12:10:49.765682] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:54.864 ************************************ 00:11:54.864 END TEST raid_state_function_test_sb 00:11:54.864 ************************************ 00:11:54.864 12:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:54.864 00:11:54.864 real 0m5.436s 00:11:54.864 user 0m8.228s 00:11:54.864 sys 0m0.737s 00:11:54.864 12:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:54.864 12:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.864 12:10:50 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:11:54.864 12:10:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:54.864 12:10:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:54.864 12:10:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:54.864 ************************************ 00:11:54.864 START TEST raid_superblock_test 00:11:54.864 ************************************ 00:11:54.864 12:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:11:54.864 12:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:11:54.864 12:10:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:11:54.864 12:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:54.864 12:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:54.864 12:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:54.864 12:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:54.864 12:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:54.864 12:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:54.864 12:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:54.864 12:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:54.864 12:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:54.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:54.864 12:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:54.864 12:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:54.864 12:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:11:54.864 12:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:54.864 12:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:54.864 12:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61180 00:11:54.864 12:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61180 00:11:54.864 12:10:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:54.864 12:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61180 ']' 00:11:54.864 12:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.864 12:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:54.864 12:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.864 12:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:54.864 12:10:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.123 [2024-11-25 12:10:51.019012] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:11:55.123 [2024-11-25 12:10:51.019475] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61180 ] 00:11:55.123 [2024-11-25 12:10:51.207732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.381 [2024-11-25 12:10:51.369300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.640 [2024-11-25 12:10:51.602430] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:55.640 [2024-11-25 12:10:51.602688] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:56.208 12:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:56.208 12:10:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:56.208 12:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:56.208 12:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:56.208 12:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:56.208 12:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:56.208 12:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:56.208 12:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:56.208 12:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:56.208 12:10:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:56.209 
12:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.209 malloc1 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.209 [2024-11-25 12:10:52.051813] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:56.209 [2024-11-25 12:10:52.052033] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.209 [2024-11-25 12:10:52.052116] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:56.209 [2024-11-25 12:10:52.052425] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.209 [2024-11-25 12:10:52.055387] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.209 [2024-11-25 12:10:52.055558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:56.209 pt1 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.209 malloc2 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.209 [2024-11-25 12:10:52.107941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:56.209 [2024-11-25 12:10:52.108136] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.209 [2024-11-25 12:10:52.108213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:56.209 [2024-11-25 12:10:52.108325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.209 [2024-11-25 12:10:52.111117] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.209 [2024-11-25 12:10:52.111275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:56.209 
pt2 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.209 [2024-11-25 12:10:52.120028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:56.209 [2024-11-25 12:10:52.122434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:56.209 [2024-11-25 12:10:52.122646] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:56.209 [2024-11-25 12:10:52.122665] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:56.209 [2024-11-25 12:10:52.122972] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:56.209 [2024-11-25 12:10:52.123167] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:56.209 [2024-11-25 12:10:52.123190] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:56.209 [2024-11-25 12:10:52.123386] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.209 "name": "raid_bdev1", 00:11:56.209 "uuid": "e6ad6da0-5bac-47fd-a461-0d9ddd2ed48f", 00:11:56.209 "strip_size_kb": 64, 00:11:56.209 "state": "online", 00:11:56.209 "raid_level": "raid0", 00:11:56.209 "superblock": true, 00:11:56.209 "num_base_bdevs": 2, 00:11:56.209 "num_base_bdevs_discovered": 2, 00:11:56.209 "num_base_bdevs_operational": 2, 00:11:56.209 "base_bdevs_list": [ 00:11:56.209 { 00:11:56.209 "name": "pt1", 
00:11:56.209 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:56.209 "is_configured": true, 00:11:56.209 "data_offset": 2048, 00:11:56.209 "data_size": 63488 00:11:56.209 }, 00:11:56.209 { 00:11:56.209 "name": "pt2", 00:11:56.209 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:56.209 "is_configured": true, 00:11:56.209 "data_offset": 2048, 00:11:56.209 "data_size": 63488 00:11:56.209 } 00:11:56.209 ] 00:11:56.209 }' 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.209 12:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.782 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:56.782 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:56.782 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:56.782 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:56.782 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:56.782 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:56.782 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:56.782 12:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.782 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:56.782 12:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.782 [2024-11-25 12:10:52.644498] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:56.782 12:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.782 12:10:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:56.782 "name": "raid_bdev1", 00:11:56.782 "aliases": [ 00:11:56.782 "e6ad6da0-5bac-47fd-a461-0d9ddd2ed48f" 00:11:56.782 ], 00:11:56.782 "product_name": "Raid Volume", 00:11:56.782 "block_size": 512, 00:11:56.782 "num_blocks": 126976, 00:11:56.782 "uuid": "e6ad6da0-5bac-47fd-a461-0d9ddd2ed48f", 00:11:56.782 "assigned_rate_limits": { 00:11:56.782 "rw_ios_per_sec": 0, 00:11:56.782 "rw_mbytes_per_sec": 0, 00:11:56.782 "r_mbytes_per_sec": 0, 00:11:56.782 "w_mbytes_per_sec": 0 00:11:56.782 }, 00:11:56.782 "claimed": false, 00:11:56.782 "zoned": false, 00:11:56.782 "supported_io_types": { 00:11:56.782 "read": true, 00:11:56.782 "write": true, 00:11:56.782 "unmap": true, 00:11:56.782 "flush": true, 00:11:56.782 "reset": true, 00:11:56.782 "nvme_admin": false, 00:11:56.782 "nvme_io": false, 00:11:56.782 "nvme_io_md": false, 00:11:56.782 "write_zeroes": true, 00:11:56.782 "zcopy": false, 00:11:56.782 "get_zone_info": false, 00:11:56.782 "zone_management": false, 00:11:56.782 "zone_append": false, 00:11:56.782 "compare": false, 00:11:56.782 "compare_and_write": false, 00:11:56.782 "abort": false, 00:11:56.782 "seek_hole": false, 00:11:56.782 "seek_data": false, 00:11:56.782 "copy": false, 00:11:56.782 "nvme_iov_md": false 00:11:56.782 }, 00:11:56.782 "memory_domains": [ 00:11:56.782 { 00:11:56.782 "dma_device_id": "system", 00:11:56.782 "dma_device_type": 1 00:11:56.782 }, 00:11:56.782 { 00:11:56.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.782 "dma_device_type": 2 00:11:56.782 }, 00:11:56.782 { 00:11:56.782 "dma_device_id": "system", 00:11:56.782 "dma_device_type": 1 00:11:56.782 }, 00:11:56.782 { 00:11:56.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.782 "dma_device_type": 2 00:11:56.782 } 00:11:56.782 ], 00:11:56.782 "driver_specific": { 00:11:56.782 "raid": { 00:11:56.782 "uuid": "e6ad6da0-5bac-47fd-a461-0d9ddd2ed48f", 00:11:56.782 "strip_size_kb": 64, 00:11:56.782 "state": "online", 00:11:56.782 
"raid_level": "raid0", 00:11:56.782 "superblock": true, 00:11:56.782 "num_base_bdevs": 2, 00:11:56.782 "num_base_bdevs_discovered": 2, 00:11:56.782 "num_base_bdevs_operational": 2, 00:11:56.782 "base_bdevs_list": [ 00:11:56.782 { 00:11:56.782 "name": "pt1", 00:11:56.782 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:56.782 "is_configured": true, 00:11:56.782 "data_offset": 2048, 00:11:56.782 "data_size": 63488 00:11:56.782 }, 00:11:56.782 { 00:11:56.782 "name": "pt2", 00:11:56.782 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:56.782 "is_configured": true, 00:11:56.782 "data_offset": 2048, 00:11:56.782 "data_size": 63488 00:11:56.782 } 00:11:56.782 ] 00:11:56.782 } 00:11:56.782 } 00:11:56.782 }' 00:11:56.782 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:56.782 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:56.782 pt2' 00:11:56.782 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.782 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:56.782 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:56.782 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:56.782 12:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.782 12:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.782 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.782 12:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.783 12:10:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:56.783 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:56.783 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:56.783 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:56.783 12:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.783 12:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.783 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.783 12:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.042 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.042 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.042 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:57.042 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:57.042 12:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.042 12:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.042 [2024-11-25 12:10:52.908573] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:57.042 12:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.042 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e6ad6da0-5bac-47fd-a461-0d9ddd2ed48f 00:11:57.042 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
e6ad6da0-5bac-47fd-a461-0d9ddd2ed48f ']' 00:11:57.042 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:57.042 12:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.042 12:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.042 [2024-11-25 12:10:52.956162] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:57.042 [2024-11-25 12:10:52.956321] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:57.042 [2024-11-25 12:10:52.956475] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:57.042 [2024-11-25 12:10:52.956545] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:57.042 [2024-11-25 12:10:52.956565] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:57.042 12:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.042 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.042 12:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:57.042 12:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.042 12:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.042 12:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.042 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:57.042 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:57.042 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:57.042 12:10:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:57.042 12:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.042 12:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.042 12:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.042 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:57.042 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:57.042 12:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.042 12:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.042 12:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.042 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:57.042 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:57.042 12:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.042 12:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.042 12:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.042 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:57.042 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:57.042 12:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:57.042 12:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create 
-z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:57.042 12:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:57.042 12:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:57.042 12:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:57.042 12:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:57.042 12:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:57.042 12:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.042 12:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.042 [2024-11-25 12:10:53.084234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:57.042 [2024-11-25 12:10:53.086831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:57.042 [2024-11-25 12:10:53.086924] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:57.042 [2024-11-25 12:10:53.087003] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:57.042 [2024-11-25 12:10:53.087038] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:57.042 [2024-11-25 12:10:53.087058] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:57.042 request: 00:11:57.042 { 00:11:57.042 "name": "raid_bdev1", 00:11:57.042 "raid_level": "raid0", 00:11:57.042 "base_bdevs": [ 00:11:57.042 "malloc1", 00:11:57.042 "malloc2" 00:11:57.042 ], 00:11:57.042 "strip_size_kb": 64, 00:11:57.042 
"superblock": false, 00:11:57.042 "method": "bdev_raid_create", 00:11:57.042 "req_id": 1 00:11:57.042 } 00:11:57.042 Got JSON-RPC error response 00:11:57.042 response: 00:11:57.042 { 00:11:57.042 "code": -17, 00:11:57.042 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:57.042 } 00:11:57.042 12:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:57.042 12:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:57.042 12:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:57.042 12:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:57.042 12:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:57.042 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:57.042 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.042 12:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.042 12:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.042 12:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.301 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:57.301 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:57.301 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:57.301 12:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.301 12:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.301 [2024-11-25 12:10:53.140236] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on malloc1 00:11:57.301 [2024-11-25 12:10:53.140450] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.301 [2024-11-25 12:10:53.140596] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:57.301 [2024-11-25 12:10:53.140719] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.301 [2024-11-25 12:10:53.143696] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.301 [2024-11-25 12:10:53.143862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:57.301 [2024-11-25 12:10:53.144104] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:57.301 [2024-11-25 12:10:53.144286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:57.301 pt1 00:11:57.301 12:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.301 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:11:57.301 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:57.301 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.301 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:57.301 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.301 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:57.301 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.301 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.301 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:57.301 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.301 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.301 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.301 12:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.301 12:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.301 12:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.301 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.301 "name": "raid_bdev1", 00:11:57.301 "uuid": "e6ad6da0-5bac-47fd-a461-0d9ddd2ed48f", 00:11:57.301 "strip_size_kb": 64, 00:11:57.301 "state": "configuring", 00:11:57.301 "raid_level": "raid0", 00:11:57.301 "superblock": true, 00:11:57.301 "num_base_bdevs": 2, 00:11:57.301 "num_base_bdevs_discovered": 1, 00:11:57.301 "num_base_bdevs_operational": 2, 00:11:57.301 "base_bdevs_list": [ 00:11:57.301 { 00:11:57.301 "name": "pt1", 00:11:57.301 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:57.301 "is_configured": true, 00:11:57.301 "data_offset": 2048, 00:11:57.301 "data_size": 63488 00:11:57.301 }, 00:11:57.301 { 00:11:57.301 "name": null, 00:11:57.301 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:57.301 "is_configured": false, 00:11:57.301 "data_offset": 2048, 00:11:57.301 "data_size": 63488 00:11:57.301 } 00:11:57.301 ] 00:11:57.301 }' 00:11:57.301 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.301 12:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.868 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:11:57.868 12:10:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:57.868 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:57.868 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:57.868 12:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.868 12:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.868 [2024-11-25 12:10:53.680806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:57.868 [2024-11-25 12:10:53.680899] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.868 [2024-11-25 12:10:53.680932] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:57.868 [2024-11-25 12:10:53.680949] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.868 [2024-11-25 12:10:53.681554] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.868 [2024-11-25 12:10:53.681603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:57.868 [2024-11-25 12:10:53.681706] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:57.868 [2024-11-25 12:10:53.681745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:57.868 [2024-11-25 12:10:53.681889] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:57.868 [2024-11-25 12:10:53.681912] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:57.868 [2024-11-25 12:10:53.682243] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:57.868 [2024-11-25 12:10:53.682453] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:11:57.868 [2024-11-25 12:10:53.682476] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:57.868 [2024-11-25 12:10:53.682646] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:57.868 pt2 00:11:57.868 12:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.868 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:57.868 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:57.868 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:11:57.868 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:57.868 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.868 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:57.868 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.868 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:57.868 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.868 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.868 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.868 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.868 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.868 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.868 12:10:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.868 12:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.868 12:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.868 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.868 "name": "raid_bdev1", 00:11:57.868 "uuid": "e6ad6da0-5bac-47fd-a461-0d9ddd2ed48f", 00:11:57.868 "strip_size_kb": 64, 00:11:57.868 "state": "online", 00:11:57.868 "raid_level": "raid0", 00:11:57.868 "superblock": true, 00:11:57.868 "num_base_bdevs": 2, 00:11:57.868 "num_base_bdevs_discovered": 2, 00:11:57.868 "num_base_bdevs_operational": 2, 00:11:57.868 "base_bdevs_list": [ 00:11:57.868 { 00:11:57.868 "name": "pt1", 00:11:57.868 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:57.868 "is_configured": true, 00:11:57.868 "data_offset": 2048, 00:11:57.868 "data_size": 63488 00:11:57.868 }, 00:11:57.868 { 00:11:57.868 "name": "pt2", 00:11:57.868 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:57.868 "is_configured": true, 00:11:57.868 "data_offset": 2048, 00:11:57.868 "data_size": 63488 00:11:57.868 } 00:11:57.868 ] 00:11:57.868 }' 00:11:57.868 12:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.868 12:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.436 12:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:58.436 12:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:58.436 12:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:58.436 12:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:58.436 12:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:58.436 12:10:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:58.436 12:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:58.436 12:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.436 12:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.436 12:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:58.436 [2024-11-25 12:10:54.229251] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:58.436 12:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.436 12:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:58.436 "name": "raid_bdev1", 00:11:58.436 "aliases": [ 00:11:58.436 "e6ad6da0-5bac-47fd-a461-0d9ddd2ed48f" 00:11:58.436 ], 00:11:58.436 "product_name": "Raid Volume", 00:11:58.436 "block_size": 512, 00:11:58.436 "num_blocks": 126976, 00:11:58.436 "uuid": "e6ad6da0-5bac-47fd-a461-0d9ddd2ed48f", 00:11:58.436 "assigned_rate_limits": { 00:11:58.436 "rw_ios_per_sec": 0, 00:11:58.436 "rw_mbytes_per_sec": 0, 00:11:58.436 "r_mbytes_per_sec": 0, 00:11:58.436 "w_mbytes_per_sec": 0 00:11:58.436 }, 00:11:58.436 "claimed": false, 00:11:58.436 "zoned": false, 00:11:58.436 "supported_io_types": { 00:11:58.436 "read": true, 00:11:58.436 "write": true, 00:11:58.436 "unmap": true, 00:11:58.436 "flush": true, 00:11:58.436 "reset": true, 00:11:58.436 "nvme_admin": false, 00:11:58.436 "nvme_io": false, 00:11:58.436 "nvme_io_md": false, 00:11:58.436 "write_zeroes": true, 00:11:58.436 "zcopy": false, 00:11:58.436 "get_zone_info": false, 00:11:58.436 "zone_management": false, 00:11:58.436 "zone_append": false, 00:11:58.436 "compare": false, 00:11:58.436 "compare_and_write": false, 00:11:58.436 "abort": false, 00:11:58.436 "seek_hole": false, 00:11:58.436 
"seek_data": false, 00:11:58.436 "copy": false, 00:11:58.436 "nvme_iov_md": false 00:11:58.436 }, 00:11:58.436 "memory_domains": [ 00:11:58.436 { 00:11:58.436 "dma_device_id": "system", 00:11:58.436 "dma_device_type": 1 00:11:58.436 }, 00:11:58.436 { 00:11:58.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.436 "dma_device_type": 2 00:11:58.436 }, 00:11:58.436 { 00:11:58.436 "dma_device_id": "system", 00:11:58.436 "dma_device_type": 1 00:11:58.436 }, 00:11:58.436 { 00:11:58.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.436 "dma_device_type": 2 00:11:58.436 } 00:11:58.436 ], 00:11:58.436 "driver_specific": { 00:11:58.436 "raid": { 00:11:58.436 "uuid": "e6ad6da0-5bac-47fd-a461-0d9ddd2ed48f", 00:11:58.436 "strip_size_kb": 64, 00:11:58.436 "state": "online", 00:11:58.436 "raid_level": "raid0", 00:11:58.436 "superblock": true, 00:11:58.436 "num_base_bdevs": 2, 00:11:58.436 "num_base_bdevs_discovered": 2, 00:11:58.436 "num_base_bdevs_operational": 2, 00:11:58.436 "base_bdevs_list": [ 00:11:58.436 { 00:11:58.436 "name": "pt1", 00:11:58.436 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:58.436 "is_configured": true, 00:11:58.436 "data_offset": 2048, 00:11:58.436 "data_size": 63488 00:11:58.436 }, 00:11:58.436 { 00:11:58.436 "name": "pt2", 00:11:58.436 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:58.436 "is_configured": true, 00:11:58.436 "data_offset": 2048, 00:11:58.436 "data_size": 63488 00:11:58.436 } 00:11:58.436 ] 00:11:58.436 } 00:11:58.436 } 00:11:58.436 }' 00:11:58.436 12:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:58.436 12:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:58.436 pt2' 00:11:58.436 12:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:58.436 12:10:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:58.436 12:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:58.436 12:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:58.436 12:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:58.436 12:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.436 12:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.436 12:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.436 12:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:58.436 12:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:58.436 12:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:58.436 12:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:58.436 12:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:58.436 12:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.436 12:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.436 12:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.436 12:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:58.436 12:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:58.436 12:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:11:58.436 12:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.436 12:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.436 12:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:58.436 [2024-11-25 12:10:54.485270] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:58.436 12:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.695 12:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e6ad6da0-5bac-47fd-a461-0d9ddd2ed48f '!=' e6ad6da0-5bac-47fd-a461-0d9ddd2ed48f ']' 00:11:58.695 12:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:11:58.695 12:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:58.695 12:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:58.695 12:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61180 00:11:58.695 12:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61180 ']' 00:11:58.695 12:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61180 00:11:58.695 12:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:58.695 12:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:58.695 12:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61180 00:11:58.695 12:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:58.695 12:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:58.695 12:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
61180' 00:11:58.695 killing process with pid 61180 00:11:58.695 12:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61180 00:11:58.695 [2024-11-25 12:10:54.563532] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:58.695 12:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61180 00:11:58.695 [2024-11-25 12:10:54.563818] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:58.695 [2024-11-25 12:10:54.564000] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:58.695 [2024-11-25 12:10:54.564183] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:58.695 [2024-11-25 12:10:54.752736] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:59.706 12:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:59.706 00:11:59.706 real 0m4.892s 00:11:59.706 user 0m7.174s 00:11:59.706 sys 0m0.770s 00:11:59.706 12:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:59.706 12:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.706 ************************************ 00:11:59.706 END TEST raid_superblock_test 00:11:59.706 ************************************ 00:11:59.964 12:10:55 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:11:59.965 12:10:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:59.965 12:10:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:59.965 12:10:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:59.965 ************************************ 00:11:59.965 START TEST raid_read_error_test 00:11:59.965 ************************************ 00:11:59.965 12:10:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:11:59.965 12:10:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:59.965 12:10:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:11:59.965 12:10:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:59.965 12:10:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:59.965 12:10:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:59.965 12:10:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:59.965 12:10:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:59.965 12:10:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:59.965 12:10:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:59.965 12:10:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:59.965 12:10:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:59.965 12:10:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:59.965 12:10:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:59.965 12:10:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:59.965 12:10:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:59.965 12:10:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:59.965 12:10:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:59.965 12:10:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:59.965 12:10:55 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:59.965 12:10:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:59.965 12:10:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:59.965 12:10:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:59.965 12:10:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wOwHCYeDs3 00:11:59.965 12:10:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61392 00:11:59.965 12:10:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:59.965 12:10:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61392 00:11:59.965 12:10:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61392 ']' 00:11:59.965 12:10:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.965 12:10:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:59.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.965 12:10:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.965 12:10:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:59.965 12:10:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.965 [2024-11-25 12:10:55.936435] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:11:59.965 [2024-11-25 12:10:55.936598] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61392 ] 00:12:00.223 [2024-11-25 12:10:56.120698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.223 [2024-11-25 12:10:56.281940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.481 [2024-11-25 12:10:56.506418] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:00.481 [2024-11-25 12:10:56.506470] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:01.049 12:10:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:01.049 12:10:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:01.049 12:10:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:01.049 12:10:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:01.049 12:10:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.049 12:10:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.049 BaseBdev1_malloc 00:12:01.049 12:10:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.049 12:10:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:01.049 12:10:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.049 12:10:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.049 true 00:12:01.049 12:10:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:01.049 12:10:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:01.049 12:10:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.049 12:10:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.049 [2024-11-25 12:10:56.980167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:01.049 [2024-11-25 12:10:56.980404] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:01.049 [2024-11-25 12:10:56.980449] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:01.049 [2024-11-25 12:10:56.980470] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:01.049 [2024-11-25 12:10:56.983352] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:01.049 [2024-11-25 12:10:56.983397] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:01.049 BaseBdev1 00:12:01.049 12:10:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.049 12:10:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:01.049 12:10:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:01.049 12:10:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.049 12:10:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.049 BaseBdev2_malloc 00:12:01.049 12:10:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.049 12:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:01.049 12:10:57 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.049 12:10:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.049 true 00:12:01.049 12:10:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.049 12:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:01.049 12:10:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.049 12:10:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.049 [2024-11-25 12:10:57.037375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:01.049 [2024-11-25 12:10:57.037444] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:01.049 [2024-11-25 12:10:57.037474] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:01.049 [2024-11-25 12:10:57.037492] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:01.049 [2024-11-25 12:10:57.040417] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:01.049 [2024-11-25 12:10:57.040467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:01.049 BaseBdev2 00:12:01.049 12:10:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.049 12:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:12:01.050 12:10:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.050 12:10:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.050 [2024-11-25 12:10:57.045449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:12:01.050 [2024-11-25 12:10:57.048153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:01.050 [2024-11-25 12:10:57.048469] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:01.050 [2024-11-25 12:10:57.048496] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:01.050 [2024-11-25 12:10:57.048785] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:01.050 [2024-11-25 12:10:57.049024] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:01.050 [2024-11-25 12:10:57.049091] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:01.050 [2024-11-25 12:10:57.049357] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:01.050 12:10:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.050 12:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:01.050 12:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:01.050 12:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:01.050 12:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:01.050 12:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:01.050 12:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:01.050 12:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.050 12:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.050 12:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:01.050 12:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.050 12:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.050 12:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.050 12:10:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.050 12:10:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.050 12:10:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.050 12:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.050 "name": "raid_bdev1", 00:12:01.050 "uuid": "d08ccb9a-3d36-419e-88b8-6e3ab65ddbba", 00:12:01.050 "strip_size_kb": 64, 00:12:01.050 "state": "online", 00:12:01.050 "raid_level": "raid0", 00:12:01.050 "superblock": true, 00:12:01.050 "num_base_bdevs": 2, 00:12:01.050 "num_base_bdevs_discovered": 2, 00:12:01.050 "num_base_bdevs_operational": 2, 00:12:01.050 "base_bdevs_list": [ 00:12:01.050 { 00:12:01.050 "name": "BaseBdev1", 00:12:01.050 "uuid": "0d792a25-40a1-5168-a653-ba56cc9b79af", 00:12:01.050 "is_configured": true, 00:12:01.050 "data_offset": 2048, 00:12:01.050 "data_size": 63488 00:12:01.050 }, 00:12:01.050 { 00:12:01.050 "name": "BaseBdev2", 00:12:01.050 "uuid": "68b56600-70cc-5976-8235-17407ab71a69", 00:12:01.050 "is_configured": true, 00:12:01.050 "data_offset": 2048, 00:12:01.050 "data_size": 63488 00:12:01.050 } 00:12:01.050 ] 00:12:01.050 }' 00:12:01.050 12:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.050 12:10:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.617 12:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:01.617 12:10:57 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:01.617 [2024-11-25 12:10:57.663123] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:02.553 12:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:02.553 12:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.553 12:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.553 12:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.553 12:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:02.553 12:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:02.553 12:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:12:02.553 12:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:02.553 12:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:02.553 12:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:02.553 12:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:02.553 12:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:02.553 12:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:02.553 12:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.553 12:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.553 12:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:02.553 12:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.553 12:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.553 12:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.553 12:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.553 12:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.553 12:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.553 12:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.553 "name": "raid_bdev1", 00:12:02.553 "uuid": "d08ccb9a-3d36-419e-88b8-6e3ab65ddbba", 00:12:02.553 "strip_size_kb": 64, 00:12:02.553 "state": "online", 00:12:02.553 "raid_level": "raid0", 00:12:02.553 "superblock": true, 00:12:02.553 "num_base_bdevs": 2, 00:12:02.553 "num_base_bdevs_discovered": 2, 00:12:02.553 "num_base_bdevs_operational": 2, 00:12:02.553 "base_bdevs_list": [ 00:12:02.553 { 00:12:02.553 "name": "BaseBdev1", 00:12:02.553 "uuid": "0d792a25-40a1-5168-a653-ba56cc9b79af", 00:12:02.553 "is_configured": true, 00:12:02.553 "data_offset": 2048, 00:12:02.553 "data_size": 63488 00:12:02.553 }, 00:12:02.553 { 00:12:02.553 "name": "BaseBdev2", 00:12:02.553 "uuid": "68b56600-70cc-5976-8235-17407ab71a69", 00:12:02.553 "is_configured": true, 00:12:02.553 "data_offset": 2048, 00:12:02.553 "data_size": 63488 00:12:02.553 } 00:12:02.553 ] 00:12:02.553 }' 00:12:02.553 12:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.553 12:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.121 12:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:03.121 12:10:59 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.121 12:10:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.121 [2024-11-25 12:10:59.068405] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:03.121 [2024-11-25 12:10:59.068614] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:03.121 [2024-11-25 12:10:59.072561] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:03.121 [2024-11-25 12:10:59.072794] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:03.121 { 00:12:03.121 "results": [ 00:12:03.121 { 00:12:03.121 "job": "raid_bdev1", 00:12:03.121 "core_mask": "0x1", 00:12:03.121 "workload": "randrw", 00:12:03.121 "percentage": 50, 00:12:03.121 "status": "finished", 00:12:03.121 "queue_depth": 1, 00:12:03.121 "io_size": 131072, 00:12:03.121 "runtime": 1.403111, 00:12:03.121 "iops": 10731.153843138569, 00:12:03.121 "mibps": 1341.394230392321, 00:12:03.121 "io_failed": 1, 00:12:03.121 "io_timeout": 0, 00:12:03.121 "avg_latency_us": 130.5935304700612, 00:12:03.121 "min_latency_us": 42.123636363636365, 00:12:03.121 "max_latency_us": 1854.370909090909 00:12:03.121 } 00:12:03.121 ], 00:12:03.121 "core_count": 1 00:12:03.121 } 00:12:03.121 [2024-11-25 12:10:59.072991] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:03.121 [2024-11-25 12:10:59.073025] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:03.121 12:10:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.121 12:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61392 00:12:03.121 12:10:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61392 ']' 00:12:03.121 12:10:59 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61392 00:12:03.121 12:10:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:03.121 12:10:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:03.121 12:10:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61392 00:12:03.121 12:10:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:03.121 12:10:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:03.121 12:10:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61392' 00:12:03.121 killing process with pid 61392 00:12:03.121 12:10:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61392 00:12:03.121 [2024-11-25 12:10:59.115095] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:03.121 12:10:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61392 00:12:03.447 [2024-11-25 12:10:59.246877] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:04.388 12:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wOwHCYeDs3 00:12:04.388 12:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:04.388 12:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:04.388 12:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:12:04.388 12:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:04.388 12:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:04.388 12:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:04.388 12:11:00 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:12:04.388 00:12:04.388 real 0m4.538s 00:12:04.388 user 0m5.657s 00:12:04.388 sys 0m0.542s 00:12:04.388 12:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:04.388 ************************************ 00:12:04.388 END TEST raid_read_error_test 00:12:04.388 ************************************ 00:12:04.388 12:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.388 12:11:00 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:12:04.388 12:11:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:04.388 12:11:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.388 12:11:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:04.388 ************************************ 00:12:04.388 START TEST raid_write_error_test 00:12:04.388 ************************************ 00:12:04.388 12:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:12:04.388 12:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:04.388 12:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:12:04.388 12:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:04.388 12:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:04.388 12:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:04.388 12:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:04.388 12:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:04.388 12:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:04.388 12:11:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:04.388 12:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:04.388 12:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:04.388 12:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:04.388 12:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:04.388 12:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:04.388 12:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:04.388 12:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:04.388 12:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:04.388 12:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:04.388 12:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:04.388 12:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:04.388 12:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:04.388 12:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:04.388 12:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.K172lN6r4I 00:12:04.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:04.388 12:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61545 00:12:04.388 12:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61545 00:12:04.388 12:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:04.388 12:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61545 ']' 00:12:04.388 12:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.388 12:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:04.388 12:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.388 12:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:04.388 12:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.647 [2024-11-25 12:11:00.568069] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:12:04.648 [2024-11-25 12:11:00.568479] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61545 ] 00:12:04.907 [2024-11-25 12:11:00.753947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.907 [2024-11-25 12:11:00.912059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.165 [2024-11-25 12:11:01.132687] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:05.165 [2024-11-25 12:11:01.132995] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:05.731 12:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:05.731 12:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:05.731 12:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:05.731 12:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:05.731 12:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.731 12:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.731 BaseBdev1_malloc 00:12:05.731 12:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.731 12:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:05.731 12:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.731 12:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.731 true 00:12:05.731 12:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:05.731 12:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:05.731 12:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.731 12:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.731 [2024-11-25 12:11:01.606650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:05.731 [2024-11-25 12:11:01.606852] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.731 [2024-11-25 12:11:01.606897] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:05.731 [2024-11-25 12:11:01.606916] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.731 [2024-11-25 12:11:01.609729] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.731 [2024-11-25 12:11:01.609778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:05.731 BaseBdev1 00:12:05.731 12:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.731 12:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:05.731 12:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:05.731 12:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.731 12:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.731 BaseBdev2_malloc 00:12:05.731 12:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.731 12:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:05.731 12:11:01 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.731 12:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.731 true 00:12:05.732 12:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.732 12:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:05.732 12:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.732 12:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.732 [2024-11-25 12:11:01.663058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:05.732 [2024-11-25 12:11:01.663128] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.732 [2024-11-25 12:11:01.663157] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:05.732 [2024-11-25 12:11:01.663175] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.732 [2024-11-25 12:11:01.665936] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.732 [2024-11-25 12:11:01.665995] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:05.732 BaseBdev2 00:12:05.732 12:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.732 12:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:12:05.732 12:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.732 12:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.732 [2024-11-25 12:11:01.671146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:12:05.732 [2024-11-25 12:11:01.673649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:05.732 [2024-11-25 12:11:01.673906] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:05.732 [2024-11-25 12:11:01.673932] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:05.732 [2024-11-25 12:11:01.674242] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:05.732 [2024-11-25 12:11:01.674489] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:05.732 [2024-11-25 12:11:01.674509] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:05.732 [2024-11-25 12:11:01.674705] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.732 12:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.732 12:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:05.732 12:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:05.732 12:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.732 12:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:05.732 12:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:05.732 12:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:05.732 12:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.732 12:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.732 12:11:01 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.732 12:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.732 12:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.732 12:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.732 12:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.732 12:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.732 12:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.732 12:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.732 "name": "raid_bdev1", 00:12:05.732 "uuid": "1cbc46bd-10e0-4b58-883e-e06aaac789fb", 00:12:05.732 "strip_size_kb": 64, 00:12:05.732 "state": "online", 00:12:05.732 "raid_level": "raid0", 00:12:05.732 "superblock": true, 00:12:05.732 "num_base_bdevs": 2, 00:12:05.732 "num_base_bdevs_discovered": 2, 00:12:05.732 "num_base_bdevs_operational": 2, 00:12:05.732 "base_bdevs_list": [ 00:12:05.732 { 00:12:05.732 "name": "BaseBdev1", 00:12:05.732 "uuid": "3c49eb61-6c18-57af-8440-42c1b894e8e9", 00:12:05.732 "is_configured": true, 00:12:05.732 "data_offset": 2048, 00:12:05.732 "data_size": 63488 00:12:05.732 }, 00:12:05.732 { 00:12:05.732 "name": "BaseBdev2", 00:12:05.732 "uuid": "a0fe7882-9db3-5e2d-983c-f4e33f62754b", 00:12:05.732 "is_configured": true, 00:12:05.732 "data_offset": 2048, 00:12:05.732 "data_size": 63488 00:12:05.732 } 00:12:05.732 ] 00:12:05.732 }' 00:12:05.732 12:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.732 12:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.298 12:11:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:06.298 12:11:02 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:06.298 [2024-11-25 12:11:02.292791] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:07.234 12:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:07.234 12:11:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.234 12:11:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.234 12:11:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.234 12:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:07.234 12:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:07.234 12:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:12:07.234 12:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:07.234 12:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:07.234 12:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:07.234 12:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:07.234 12:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:07.234 12:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:07.234 12:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.234 12:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.234 12:11:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.234 12:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.234 12:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.234 12:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.234 12:11:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.234 12:11:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.234 12:11:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.234 12:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.234 "name": "raid_bdev1", 00:12:07.234 "uuid": "1cbc46bd-10e0-4b58-883e-e06aaac789fb", 00:12:07.234 "strip_size_kb": 64, 00:12:07.234 "state": "online", 00:12:07.234 "raid_level": "raid0", 00:12:07.234 "superblock": true, 00:12:07.234 "num_base_bdevs": 2, 00:12:07.234 "num_base_bdevs_discovered": 2, 00:12:07.234 "num_base_bdevs_operational": 2, 00:12:07.234 "base_bdevs_list": [ 00:12:07.234 { 00:12:07.234 "name": "BaseBdev1", 00:12:07.234 "uuid": "3c49eb61-6c18-57af-8440-42c1b894e8e9", 00:12:07.234 "is_configured": true, 00:12:07.234 "data_offset": 2048, 00:12:07.234 "data_size": 63488 00:12:07.234 }, 00:12:07.234 { 00:12:07.234 "name": "BaseBdev2", 00:12:07.234 "uuid": "a0fe7882-9db3-5e2d-983c-f4e33f62754b", 00:12:07.234 "is_configured": true, 00:12:07.234 "data_offset": 2048, 00:12:07.234 "data_size": 63488 00:12:07.234 } 00:12:07.234 ] 00:12:07.234 }' 00:12:07.234 12:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.234 12:11:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.801 12:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:12:07.801 12:11:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.801 12:11:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.801 [2024-11-25 12:11:03.700231] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:07.801 [2024-11-25 12:11:03.700274] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:07.801 [2024-11-25 12:11:03.703732] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:07.801 [2024-11-25 12:11:03.703791] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:07.801 [2024-11-25 12:11:03.703836] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:07.801 [2024-11-25 12:11:03.703854] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:07.801 { 00:12:07.801 "results": [ 00:12:07.801 { 00:12:07.801 "job": "raid_bdev1", 00:12:07.801 "core_mask": "0x1", 00:12:07.801 "workload": "randrw", 00:12:07.801 "percentage": 50, 00:12:07.801 "status": "finished", 00:12:07.801 "queue_depth": 1, 00:12:07.801 "io_size": 131072, 00:12:07.801 "runtime": 1.404679, 00:12:07.801 "iops": 10927.051660913276, 00:12:07.801 "mibps": 1365.8814576141594, 00:12:07.801 "io_failed": 1, 00:12:07.801 "io_timeout": 0, 00:12:07.801 "avg_latency_us": 127.96090636659757, 00:12:07.801 "min_latency_us": 41.89090909090909, 00:12:07.801 "max_latency_us": 2353.338181818182 00:12:07.801 } 00:12:07.801 ], 00:12:07.801 "core_count": 1 00:12:07.801 } 00:12:07.801 12:11:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.801 12:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61545 00:12:07.801 12:11:03 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 61545 ']' 00:12:07.801 12:11:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61545 00:12:07.801 12:11:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:07.801 12:11:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:07.801 12:11:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61545 00:12:07.801 killing process with pid 61545 00:12:07.801 12:11:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:07.801 12:11:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:07.801 12:11:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61545' 00:12:07.801 12:11:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61545 00:12:07.801 [2024-11-25 12:11:03.741152] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:07.801 12:11:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61545 00:12:07.801 [2024-11-25 12:11:03.864134] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:09.212 12:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.K172lN6r4I 00:12:09.212 12:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:09.212 12:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:09.212 12:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:12:09.212 12:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:09.212 12:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:09.212 12:11:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:12:09.212 12:11:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:12:09.212 00:12:09.212 real 0m4.551s 00:12:09.212 user 0m5.701s 00:12:09.212 sys 0m0.547s 00:12:09.212 12:11:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:09.212 12:11:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.212 ************************************ 00:12:09.212 END TEST raid_write_error_test 00:12:09.212 ************************************ 00:12:09.212 12:11:05 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:09.212 12:11:05 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:12:09.212 12:11:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:09.212 12:11:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.212 12:11:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:09.212 ************************************ 00:12:09.212 START TEST raid_state_function_test 00:12:09.212 ************************************ 00:12:09.212 12:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:12:09.212 12:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:09.212 12:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:12:09.212 12:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:09.212 12:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:09.212 12:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:09.212 12:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:12:09.212 12:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:09.212 12:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:09.212 12:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:09.212 12:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:09.212 12:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:09.212 12:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:09.212 12:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:09.212 12:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:09.212 12:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:09.212 12:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:09.212 12:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:09.212 12:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:09.212 12:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:09.212 12:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:09.212 12:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:09.212 12:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:09.212 12:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:09.212 Process raid pid: 61683 00:12:09.212 12:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61683 
00:12:09.212 12:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:09.212 12:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61683' 00:12:09.212 12:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61683 00:12:09.212 12:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61683 ']' 00:12:09.212 12:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.212 12:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:09.212 12:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.212 12:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:09.212 12:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.212 [2024-11-25 12:11:05.142585] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:12:09.212 [2024-11-25 12:11:05.143660] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:09.471 [2024-11-25 12:11:05.342446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.471 [2024-11-25 12:11:05.499136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.729 [2024-11-25 12:11:05.707710] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:09.729 [2024-11-25 12:11:05.707764] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:10.296 12:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:10.296 12:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:10.296 12:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:10.296 12:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.296 12:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.296 [2024-11-25 12:11:06.139104] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:10.296 [2024-11-25 12:11:06.139330] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:10.296 [2024-11-25 12:11:06.139373] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:10.296 [2024-11-25 12:11:06.139398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:10.296 12:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.296 12:11:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:10.296 12:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.296 12:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:10.296 12:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:10.296 12:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:10.296 12:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:10.296 12:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.296 12:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.296 12:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.296 12:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.296 12:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.296 12:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.296 12:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.296 12:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.296 12:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.296 12:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.296 "name": "Existed_Raid", 00:12:10.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.296 "strip_size_kb": 64, 00:12:10.296 "state": "configuring", 00:12:10.296 
"raid_level": "concat", 00:12:10.296 "superblock": false, 00:12:10.296 "num_base_bdevs": 2, 00:12:10.296 "num_base_bdevs_discovered": 0, 00:12:10.296 "num_base_bdevs_operational": 2, 00:12:10.296 "base_bdevs_list": [ 00:12:10.296 { 00:12:10.296 "name": "BaseBdev1", 00:12:10.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.296 "is_configured": false, 00:12:10.296 "data_offset": 0, 00:12:10.296 "data_size": 0 00:12:10.296 }, 00:12:10.296 { 00:12:10.296 "name": "BaseBdev2", 00:12:10.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.296 "is_configured": false, 00:12:10.296 "data_offset": 0, 00:12:10.296 "data_size": 0 00:12:10.296 } 00:12:10.296 ] 00:12:10.296 }' 00:12:10.296 12:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.296 12:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.554 12:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:10.554 12:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.554 12:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.813 [2024-11-25 12:11:06.647205] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:10.813 [2024-11-25 12:11:06.647310] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:10.813 12:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.813 12:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:10.813 12:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.813 12:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:12:10.813 [2024-11-25 12:11:06.655163] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:10.813 [2024-11-25 12:11:06.655380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:10.813 [2024-11-25 12:11:06.655554] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:10.813 [2024-11-25 12:11:06.655623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:10.813 12:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.813 12:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:10.813 12:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.813 12:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.813 [2024-11-25 12:11:06.702039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:10.813 BaseBdev1 00:12:10.813 12:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.813 12:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:10.813 12:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:10.813 12:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:10.813 12:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:10.813 12:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:10.813 12:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:10.813 12:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:12:10.813 12:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.813 12:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.813 12:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.813 12:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:10.813 12:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.813 12:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.813 [ 00:12:10.813 { 00:12:10.813 "name": "BaseBdev1", 00:12:10.813 "aliases": [ 00:12:10.813 "e7256d02-3a78-4c04-a202-cfe08b89c5df" 00:12:10.813 ], 00:12:10.813 "product_name": "Malloc disk", 00:12:10.813 "block_size": 512, 00:12:10.813 "num_blocks": 65536, 00:12:10.813 "uuid": "e7256d02-3a78-4c04-a202-cfe08b89c5df", 00:12:10.813 "assigned_rate_limits": { 00:12:10.813 "rw_ios_per_sec": 0, 00:12:10.813 "rw_mbytes_per_sec": 0, 00:12:10.813 "r_mbytes_per_sec": 0, 00:12:10.813 "w_mbytes_per_sec": 0 00:12:10.813 }, 00:12:10.813 "claimed": true, 00:12:10.813 "claim_type": "exclusive_write", 00:12:10.813 "zoned": false, 00:12:10.813 "supported_io_types": { 00:12:10.813 "read": true, 00:12:10.813 "write": true, 00:12:10.813 "unmap": true, 00:12:10.813 "flush": true, 00:12:10.813 "reset": true, 00:12:10.813 "nvme_admin": false, 00:12:10.813 "nvme_io": false, 00:12:10.813 "nvme_io_md": false, 00:12:10.813 "write_zeroes": true, 00:12:10.813 "zcopy": true, 00:12:10.813 "get_zone_info": false, 00:12:10.813 "zone_management": false, 00:12:10.813 "zone_append": false, 00:12:10.813 "compare": false, 00:12:10.813 "compare_and_write": false, 00:12:10.813 "abort": true, 00:12:10.813 "seek_hole": false, 00:12:10.813 "seek_data": false, 00:12:10.813 "copy": true, 00:12:10.813 "nvme_iov_md": 
false 00:12:10.813 }, 00:12:10.813 "memory_domains": [ 00:12:10.813 { 00:12:10.813 "dma_device_id": "system", 00:12:10.813 "dma_device_type": 1 00:12:10.813 }, 00:12:10.813 { 00:12:10.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.813 "dma_device_type": 2 00:12:10.813 } 00:12:10.813 ], 00:12:10.813 "driver_specific": {} 00:12:10.813 } 00:12:10.813 ] 00:12:10.813 12:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.813 12:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:10.813 12:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:10.813 12:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.813 12:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:10.813 12:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:10.813 12:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:10.813 12:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:10.813 12:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.813 12:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.813 12:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.813 12:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.813 12:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.813 12:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.813 
12:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.813 12:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.813 12:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.813 12:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.813 "name": "Existed_Raid", 00:12:10.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.813 "strip_size_kb": 64, 00:12:10.813 "state": "configuring", 00:12:10.813 "raid_level": "concat", 00:12:10.813 "superblock": false, 00:12:10.813 "num_base_bdevs": 2, 00:12:10.813 "num_base_bdevs_discovered": 1, 00:12:10.813 "num_base_bdevs_operational": 2, 00:12:10.813 "base_bdevs_list": [ 00:12:10.813 { 00:12:10.813 "name": "BaseBdev1", 00:12:10.813 "uuid": "e7256d02-3a78-4c04-a202-cfe08b89c5df", 00:12:10.813 "is_configured": true, 00:12:10.813 "data_offset": 0, 00:12:10.813 "data_size": 65536 00:12:10.813 }, 00:12:10.813 { 00:12:10.813 "name": "BaseBdev2", 00:12:10.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.813 "is_configured": false, 00:12:10.813 "data_offset": 0, 00:12:10.813 "data_size": 0 00:12:10.813 } 00:12:10.813 ] 00:12:10.813 }' 00:12:10.814 12:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.814 12:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.381 12:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:11.381 12:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.381 12:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.381 [2024-11-25 12:11:07.242244] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:11.381 [2024-11-25 12:11:07.242450] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:11.381 12:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.381 12:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:11.381 12:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.381 12:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.381 [2024-11-25 12:11:07.254284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:11.381 [2024-11-25 12:11:07.256953] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:11.381 [2024-11-25 12:11:07.257169] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:11.381 12:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.381 12:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:11.381 12:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:11.381 12:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:11.381 12:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.381 12:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:11.381 12:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:11.381 12:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:11.381 12:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:12:11.381 12:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.381 12:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.381 12:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.381 12:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.381 12:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.381 12:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.381 12:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.381 12:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.381 12:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.381 12:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.381 "name": "Existed_Raid", 00:12:11.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.381 "strip_size_kb": 64, 00:12:11.381 "state": "configuring", 00:12:11.381 "raid_level": "concat", 00:12:11.381 "superblock": false, 00:12:11.381 "num_base_bdevs": 2, 00:12:11.381 "num_base_bdevs_discovered": 1, 00:12:11.381 "num_base_bdevs_operational": 2, 00:12:11.381 "base_bdevs_list": [ 00:12:11.381 { 00:12:11.381 "name": "BaseBdev1", 00:12:11.381 "uuid": "e7256d02-3a78-4c04-a202-cfe08b89c5df", 00:12:11.381 "is_configured": true, 00:12:11.381 "data_offset": 0, 00:12:11.381 "data_size": 65536 00:12:11.381 }, 00:12:11.381 { 00:12:11.381 "name": "BaseBdev2", 00:12:11.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.381 "is_configured": false, 00:12:11.381 "data_offset": 0, 00:12:11.381 "data_size": 0 00:12:11.381 } 
00:12:11.381 ] 00:12:11.381 }' 00:12:11.381 12:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.381 12:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.949 12:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:11.949 12:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.949 12:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.949 [2024-11-25 12:11:07.797318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:11.949 [2024-11-25 12:11:07.797375] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:11.949 [2024-11-25 12:11:07.797447] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:11.949 [2024-11-25 12:11:07.797802] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:11.949 [2024-11-25 12:11:07.798022] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:11.949 [2024-11-25 12:11:07.798053] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:11.949 [2024-11-25 12:11:07.798379] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:11.949 BaseBdev2 00:12:11.949 12:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.949 12:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:11.949 12:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:11.949 12:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:11.949 12:11:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:11.949 12:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:11.949 12:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:11.949 12:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:11.949 12:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.949 12:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.949 12:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.949 12:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:11.949 12:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.949 12:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.949 [ 00:12:11.949 { 00:12:11.949 "name": "BaseBdev2", 00:12:11.949 "aliases": [ 00:12:11.949 "9fbc8c92-6a4d-46dc-bbcf-6afb81ab6539" 00:12:11.949 ], 00:12:11.949 "product_name": "Malloc disk", 00:12:11.949 "block_size": 512, 00:12:11.949 "num_blocks": 65536, 00:12:11.949 "uuid": "9fbc8c92-6a4d-46dc-bbcf-6afb81ab6539", 00:12:11.949 "assigned_rate_limits": { 00:12:11.949 "rw_ios_per_sec": 0, 00:12:11.949 "rw_mbytes_per_sec": 0, 00:12:11.949 "r_mbytes_per_sec": 0, 00:12:11.949 "w_mbytes_per_sec": 0 00:12:11.949 }, 00:12:11.949 "claimed": true, 00:12:11.949 "claim_type": "exclusive_write", 00:12:11.949 "zoned": false, 00:12:11.949 "supported_io_types": { 00:12:11.949 "read": true, 00:12:11.949 "write": true, 00:12:11.949 "unmap": true, 00:12:11.949 "flush": true, 00:12:11.949 "reset": true, 00:12:11.949 "nvme_admin": false, 00:12:11.949 "nvme_io": false, 00:12:11.949 "nvme_io_md": 
false, 00:12:11.949 "write_zeroes": true, 00:12:11.949 "zcopy": true, 00:12:11.949 "get_zone_info": false, 00:12:11.949 "zone_management": false, 00:12:11.949 "zone_append": false, 00:12:11.949 "compare": false, 00:12:11.949 "compare_and_write": false, 00:12:11.949 "abort": true, 00:12:11.949 "seek_hole": false, 00:12:11.949 "seek_data": false, 00:12:11.949 "copy": true, 00:12:11.949 "nvme_iov_md": false 00:12:11.949 }, 00:12:11.949 "memory_domains": [ 00:12:11.949 { 00:12:11.949 "dma_device_id": "system", 00:12:11.949 "dma_device_type": 1 00:12:11.949 }, 00:12:11.949 { 00:12:11.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.949 "dma_device_type": 2 00:12:11.949 } 00:12:11.949 ], 00:12:11.949 "driver_specific": {} 00:12:11.949 } 00:12:11.949 ] 00:12:11.949 12:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.949 12:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:11.949 12:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:11.949 12:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:11.949 12:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:12:11.949 12:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.949 12:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.949 12:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:11.949 12:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:11.949 12:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:11.949 12:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:11.949 12:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.949 12:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.949 12:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.949 12:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.949 12:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.949 12:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.949 12:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.949 12:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.949 12:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.949 "name": "Existed_Raid", 00:12:11.949 "uuid": "62eee8a5-142f-45ed-bf93-6a1453321292", 00:12:11.949 "strip_size_kb": 64, 00:12:11.949 "state": "online", 00:12:11.949 "raid_level": "concat", 00:12:11.949 "superblock": false, 00:12:11.949 "num_base_bdevs": 2, 00:12:11.949 "num_base_bdevs_discovered": 2, 00:12:11.949 "num_base_bdevs_operational": 2, 00:12:11.949 "base_bdevs_list": [ 00:12:11.949 { 00:12:11.949 "name": "BaseBdev1", 00:12:11.949 "uuid": "e7256d02-3a78-4c04-a202-cfe08b89c5df", 00:12:11.949 "is_configured": true, 00:12:11.949 "data_offset": 0, 00:12:11.949 "data_size": 65536 00:12:11.949 }, 00:12:11.949 { 00:12:11.949 "name": "BaseBdev2", 00:12:11.949 "uuid": "9fbc8c92-6a4d-46dc-bbcf-6afb81ab6539", 00:12:11.949 "is_configured": true, 00:12:11.949 "data_offset": 0, 00:12:11.949 "data_size": 65536 00:12:11.949 } 00:12:11.949 ] 00:12:11.949 }' 00:12:11.949 12:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:12:11.949 12:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.533 12:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:12.533 12:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:12.533 12:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:12.533 12:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:12.533 12:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:12.533 12:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:12.533 12:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:12.533 12:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:12.533 12:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.533 12:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.533 [2024-11-25 12:11:08.357930] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:12.533 12:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.533 12:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:12.533 "name": "Existed_Raid", 00:12:12.533 "aliases": [ 00:12:12.533 "62eee8a5-142f-45ed-bf93-6a1453321292" 00:12:12.533 ], 00:12:12.533 "product_name": "Raid Volume", 00:12:12.533 "block_size": 512, 00:12:12.533 "num_blocks": 131072, 00:12:12.533 "uuid": "62eee8a5-142f-45ed-bf93-6a1453321292", 00:12:12.533 "assigned_rate_limits": { 00:12:12.533 "rw_ios_per_sec": 0, 00:12:12.533 "rw_mbytes_per_sec": 0, 00:12:12.533 "r_mbytes_per_sec": 
0, 00:12:12.533 "w_mbytes_per_sec": 0 00:12:12.533 }, 00:12:12.533 "claimed": false, 00:12:12.533 "zoned": false, 00:12:12.533 "supported_io_types": { 00:12:12.533 "read": true, 00:12:12.533 "write": true, 00:12:12.533 "unmap": true, 00:12:12.533 "flush": true, 00:12:12.533 "reset": true, 00:12:12.533 "nvme_admin": false, 00:12:12.533 "nvme_io": false, 00:12:12.533 "nvme_io_md": false, 00:12:12.533 "write_zeroes": true, 00:12:12.533 "zcopy": false, 00:12:12.533 "get_zone_info": false, 00:12:12.533 "zone_management": false, 00:12:12.533 "zone_append": false, 00:12:12.533 "compare": false, 00:12:12.533 "compare_and_write": false, 00:12:12.533 "abort": false, 00:12:12.533 "seek_hole": false, 00:12:12.533 "seek_data": false, 00:12:12.533 "copy": false, 00:12:12.533 "nvme_iov_md": false 00:12:12.533 }, 00:12:12.533 "memory_domains": [ 00:12:12.533 { 00:12:12.533 "dma_device_id": "system", 00:12:12.533 "dma_device_type": 1 00:12:12.533 }, 00:12:12.533 { 00:12:12.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.533 "dma_device_type": 2 00:12:12.533 }, 00:12:12.533 { 00:12:12.533 "dma_device_id": "system", 00:12:12.533 "dma_device_type": 1 00:12:12.533 }, 00:12:12.533 { 00:12:12.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.533 "dma_device_type": 2 00:12:12.533 } 00:12:12.533 ], 00:12:12.533 "driver_specific": { 00:12:12.533 "raid": { 00:12:12.533 "uuid": "62eee8a5-142f-45ed-bf93-6a1453321292", 00:12:12.533 "strip_size_kb": 64, 00:12:12.533 "state": "online", 00:12:12.533 "raid_level": "concat", 00:12:12.533 "superblock": false, 00:12:12.533 "num_base_bdevs": 2, 00:12:12.533 "num_base_bdevs_discovered": 2, 00:12:12.533 "num_base_bdevs_operational": 2, 00:12:12.533 "base_bdevs_list": [ 00:12:12.533 { 00:12:12.533 "name": "BaseBdev1", 00:12:12.533 "uuid": "e7256d02-3a78-4c04-a202-cfe08b89c5df", 00:12:12.533 "is_configured": true, 00:12:12.533 "data_offset": 0, 00:12:12.533 "data_size": 65536 00:12:12.533 }, 00:12:12.533 { 00:12:12.533 "name": "BaseBdev2", 
00:12:12.533 "uuid": "9fbc8c92-6a4d-46dc-bbcf-6afb81ab6539", 00:12:12.533 "is_configured": true, 00:12:12.533 "data_offset": 0, 00:12:12.533 "data_size": 65536 00:12:12.533 } 00:12:12.533 ] 00:12:12.533 } 00:12:12.533 } 00:12:12.533 }' 00:12:12.533 12:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:12.533 12:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:12.533 BaseBdev2' 00:12:12.533 12:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.533 12:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:12.533 12:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:12.533 12:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:12.533 12:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.533 12:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.533 12:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.533 12:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.533 12:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.533 12:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.533 12:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:12.533 12:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:12:12.533 12:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.533 12:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.533 12:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.533 12:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.800 12:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.800 12:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.800 12:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:12.801 12:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.801 12:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.801 [2024-11-25 12:11:08.613661] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:12.801 [2024-11-25 12:11:08.613706] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:12.801 [2024-11-25 12:11:08.613772] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:12.801 12:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.801 12:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:12.801 12:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:12.801 12:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:12.801 12:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:12.801 12:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:12:12.801 12:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:12:12.801 12:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:12.801 12:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:12.801 12:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:12.801 12:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:12.801 12:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:12.801 12:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.801 12:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.801 12:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.801 12:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.801 12:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.801 12:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.801 12:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.801 12:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.801 12:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.801 12:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.801 "name": "Existed_Raid", 00:12:12.801 "uuid": "62eee8a5-142f-45ed-bf93-6a1453321292", 00:12:12.801 "strip_size_kb": 64, 00:12:12.801 
"state": "offline", 00:12:12.801 "raid_level": "concat", 00:12:12.801 "superblock": false, 00:12:12.801 "num_base_bdevs": 2, 00:12:12.801 "num_base_bdevs_discovered": 1, 00:12:12.801 "num_base_bdevs_operational": 1, 00:12:12.801 "base_bdevs_list": [ 00:12:12.801 { 00:12:12.801 "name": null, 00:12:12.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.801 "is_configured": false, 00:12:12.801 "data_offset": 0, 00:12:12.801 "data_size": 65536 00:12:12.801 }, 00:12:12.801 { 00:12:12.801 "name": "BaseBdev2", 00:12:12.801 "uuid": "9fbc8c92-6a4d-46dc-bbcf-6afb81ab6539", 00:12:12.801 "is_configured": true, 00:12:12.801 "data_offset": 0, 00:12:12.801 "data_size": 65536 00:12:12.801 } 00:12:12.801 ] 00:12:12.801 }' 00:12:12.801 12:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.801 12:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.367 12:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:13.367 12:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:13.367 12:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.367 12:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:13.367 12:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.367 12:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.367 12:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.367 12:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:13.367 12:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:13.367 12:11:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:13.367 12:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.367 12:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.367 [2024-11-25 12:11:09.291360] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:13.367 [2024-11-25 12:11:09.291428] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:13.367 12:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.367 12:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:13.367 12:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:13.367 12:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.367 12:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.367 12:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.367 12:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:13.367 12:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.367 12:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:13.367 12:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:13.367 12:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:12:13.367 12:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61683 00:12:13.367 12:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61683 ']' 00:12:13.367 12:11:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61683 00:12:13.367 12:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:13.367 12:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:13.367 12:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61683 00:12:13.625 killing process with pid 61683 00:12:13.625 12:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:13.625 12:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:13.625 12:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61683' 00:12:13.625 12:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61683 00:12:13.625 [2024-11-25 12:11:09.464802] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:13.625 12:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61683 00:12:13.625 [2024-11-25 12:11:09.479804] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:14.561 12:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:14.561 00:12:14.561 real 0m5.487s 00:12:14.561 user 0m8.276s 00:12:14.561 sys 0m0.792s 00:12:14.561 12:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.561 ************************************ 00:12:14.561 END TEST raid_state_function_test 00:12:14.561 12:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.561 ************************************ 00:12:14.561 12:11:10 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:12:14.561 12:11:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:12:14.561 12:11:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.561 12:11:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:14.561 ************************************ 00:12:14.561 START TEST raid_state_function_test_sb 00:12:14.561 ************************************ 00:12:14.561 12:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:12:14.561 12:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:14.561 12:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:12:14.561 12:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:14.561 12:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:14.561 12:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:14.561 12:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:14.561 12:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:14.562 12:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:14.562 12:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:14.562 12:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:14.562 12:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:14.562 12:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:14.562 12:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:14.562 12:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:12:14.562 12:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:14.562 12:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:14.562 12:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:14.562 12:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:14.562 12:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:14.562 12:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:14.562 12:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:14.562 12:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:14.562 12:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:14.562 12:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61943 00:12:14.562 12:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61943' 00:12:14.562 Process raid pid: 61943 00:12:14.562 12:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61943 00:12:14.562 12:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:14.562 12:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61943 ']' 00:12:14.562 12:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.562 12:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:14.562 12:11:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.562 12:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:14.562 12:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.821 [2024-11-25 12:11:10.675592] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:12:14.821 [2024-11-25 12:11:10.675772] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:14.821 [2024-11-25 12:11:10.869734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.080 [2024-11-25 12:11:11.026565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.340 [2024-11-25 12:11:11.249865] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:15.340 [2024-11-25 12:11:11.249917] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:15.600 12:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:15.600 12:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:15.600 12:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:15.600 12:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.600 12:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.600 [2024-11-25 12:11:11.652254] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:12:15.600 [2024-11-25 12:11:11.652476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:15.600 [2024-11-25 12:11:11.652604] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:15.600 [2024-11-25 12:11:11.652667] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:15.600 12:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.600 12:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:15.600 12:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.600 12:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.600 12:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:15.600 12:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:15.600 12:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:15.600 12:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.600 12:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.600 12:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.600 12:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.600 12:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.600 12:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.600 
12:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.600 12:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.600 12:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.860 12:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.860 "name": "Existed_Raid", 00:12:15.860 "uuid": "db686ceb-180a-485b-b81e-a1a16d36e5c4", 00:12:15.860 "strip_size_kb": 64, 00:12:15.860 "state": "configuring", 00:12:15.860 "raid_level": "concat", 00:12:15.860 "superblock": true, 00:12:15.860 "num_base_bdevs": 2, 00:12:15.860 "num_base_bdevs_discovered": 0, 00:12:15.860 "num_base_bdevs_operational": 2, 00:12:15.860 "base_bdevs_list": [ 00:12:15.860 { 00:12:15.860 "name": "BaseBdev1", 00:12:15.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.860 "is_configured": false, 00:12:15.860 "data_offset": 0, 00:12:15.860 "data_size": 0 00:12:15.860 }, 00:12:15.860 { 00:12:15.860 "name": "BaseBdev2", 00:12:15.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.860 "is_configured": false, 00:12:15.860 "data_offset": 0, 00:12:15.860 "data_size": 0 00:12:15.860 } 00:12:15.860 ] 00:12:15.860 }' 00:12:15.860 12:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.860 12:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.119 12:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:16.119 12:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.119 12:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.119 [2024-11-25 12:11:12.188354] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:12:16.119 [2024-11-25 12:11:12.188396] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:16.119 12:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.119 12:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:16.119 12:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.119 12:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.119 [2024-11-25 12:11:12.196433] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:16.119 [2024-11-25 12:11:12.196497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:16.119 [2024-11-25 12:11:12.196515] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:16.119 [2024-11-25 12:11:12.196535] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:16.119 12:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.119 12:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:16.119 12:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.119 12:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.380 [2024-11-25 12:11:12.242584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:16.380 BaseBdev1 00:12:16.380 12:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.380 12:11:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:16.380 12:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:16.380 12:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:16.380 12:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:16.380 12:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:16.380 12:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:16.380 12:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:16.380 12:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.380 12:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.380 12:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.380 12:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:16.380 12:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.380 12:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.380 [ 00:12:16.380 { 00:12:16.380 "name": "BaseBdev1", 00:12:16.380 "aliases": [ 00:12:16.380 "0da41a53-d9bb-41a9-8a1a-237637d71cc1" 00:12:16.380 ], 00:12:16.380 "product_name": "Malloc disk", 00:12:16.380 "block_size": 512, 00:12:16.380 "num_blocks": 65536, 00:12:16.380 "uuid": "0da41a53-d9bb-41a9-8a1a-237637d71cc1", 00:12:16.380 "assigned_rate_limits": { 00:12:16.380 "rw_ios_per_sec": 0, 00:12:16.380 "rw_mbytes_per_sec": 0, 00:12:16.380 "r_mbytes_per_sec": 0, 00:12:16.380 "w_mbytes_per_sec": 0 00:12:16.380 }, 00:12:16.380 "claimed": true, 
00:12:16.380 "claim_type": "exclusive_write", 00:12:16.380 "zoned": false, 00:12:16.380 "supported_io_types": { 00:12:16.380 "read": true, 00:12:16.380 "write": true, 00:12:16.380 "unmap": true, 00:12:16.380 "flush": true, 00:12:16.380 "reset": true, 00:12:16.380 "nvme_admin": false, 00:12:16.380 "nvme_io": false, 00:12:16.380 "nvme_io_md": false, 00:12:16.380 "write_zeroes": true, 00:12:16.380 "zcopy": true, 00:12:16.380 "get_zone_info": false, 00:12:16.380 "zone_management": false, 00:12:16.380 "zone_append": false, 00:12:16.380 "compare": false, 00:12:16.380 "compare_and_write": false, 00:12:16.380 "abort": true, 00:12:16.380 "seek_hole": false, 00:12:16.380 "seek_data": false, 00:12:16.380 "copy": true, 00:12:16.380 "nvme_iov_md": false 00:12:16.380 }, 00:12:16.380 "memory_domains": [ 00:12:16.381 { 00:12:16.381 "dma_device_id": "system", 00:12:16.381 "dma_device_type": 1 00:12:16.381 }, 00:12:16.381 { 00:12:16.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.381 "dma_device_type": 2 00:12:16.381 } 00:12:16.381 ], 00:12:16.381 "driver_specific": {} 00:12:16.381 } 00:12:16.381 ] 00:12:16.381 12:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.381 12:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:16.381 12:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:16.381 12:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.381 12:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.381 12:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:16.381 12:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:16.381 12:11:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:16.381 12:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.381 12:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.381 12:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.381 12:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.381 12:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.381 12:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.381 12:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.381 12:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.381 12:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.381 12:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.381 "name": "Existed_Raid", 00:12:16.381 "uuid": "bb07e105-261d-41ab-8e93-227ee6578f5f", 00:12:16.381 "strip_size_kb": 64, 00:12:16.381 "state": "configuring", 00:12:16.381 "raid_level": "concat", 00:12:16.381 "superblock": true, 00:12:16.381 "num_base_bdevs": 2, 00:12:16.381 "num_base_bdevs_discovered": 1, 00:12:16.381 "num_base_bdevs_operational": 2, 00:12:16.381 "base_bdevs_list": [ 00:12:16.381 { 00:12:16.381 "name": "BaseBdev1", 00:12:16.381 "uuid": "0da41a53-d9bb-41a9-8a1a-237637d71cc1", 00:12:16.381 "is_configured": true, 00:12:16.381 "data_offset": 2048, 00:12:16.381 "data_size": 63488 00:12:16.381 }, 00:12:16.381 { 00:12:16.381 "name": "BaseBdev2", 00:12:16.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.381 
"is_configured": false, 00:12:16.381 "data_offset": 0, 00:12:16.381 "data_size": 0 00:12:16.381 } 00:12:16.381 ] 00:12:16.381 }' 00:12:16.381 12:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.381 12:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.949 12:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:16.949 12:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.949 12:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.949 [2024-11-25 12:11:12.782804] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:16.949 [2024-11-25 12:11:12.782995] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:16.949 12:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.949 12:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:16.949 12:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.949 12:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.949 [2024-11-25 12:11:12.790836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:16.949 [2024-11-25 12:11:12.793471] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:16.949 [2024-11-25 12:11:12.793676] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:16.949 12:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.949 12:11:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:16.949 12:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:16.949 12:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:16.949 12:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.949 12:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.949 12:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:16.949 12:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:16.949 12:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:16.949 12:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.949 12:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.949 12:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.949 12:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.949 12:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.949 12:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.949 12:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.949 12:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.949 12:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.949 12:11:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.949 "name": "Existed_Raid", 00:12:16.949 "uuid": "a3e0302f-3b93-4e4b-baa8-9b9164fedb37", 00:12:16.949 "strip_size_kb": 64, 00:12:16.949 "state": "configuring", 00:12:16.949 "raid_level": "concat", 00:12:16.949 "superblock": true, 00:12:16.949 "num_base_bdevs": 2, 00:12:16.949 "num_base_bdevs_discovered": 1, 00:12:16.949 "num_base_bdevs_operational": 2, 00:12:16.949 "base_bdevs_list": [ 00:12:16.949 { 00:12:16.949 "name": "BaseBdev1", 00:12:16.949 "uuid": "0da41a53-d9bb-41a9-8a1a-237637d71cc1", 00:12:16.949 "is_configured": true, 00:12:16.949 "data_offset": 2048, 00:12:16.949 "data_size": 63488 00:12:16.949 }, 00:12:16.949 { 00:12:16.949 "name": "BaseBdev2", 00:12:16.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.949 "is_configured": false, 00:12:16.949 "data_offset": 0, 00:12:16.949 "data_size": 0 00:12:16.949 } 00:12:16.949 ] 00:12:16.949 }' 00:12:16.949 12:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.949 12:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.517 12:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:17.518 12:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.518 12:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.518 BaseBdev2 00:12:17.518 [2024-11-25 12:11:13.345528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:17.518 [2024-11-25 12:11:13.345848] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:17.518 [2024-11-25 12:11:13.345868] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:17.518 [2024-11-25 12:11:13.346198] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:17.518 [2024-11-25 12:11:13.346429] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:17.518 [2024-11-25 12:11:13.346452] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:17.518 [2024-11-25 12:11:13.346621] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:17.518 12:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.518 12:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:17.518 12:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:17.518 12:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:17.518 12:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:17.518 12:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:17.518 12:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:17.518 12:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:17.518 12:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.518 12:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.518 12:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.518 12:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:17.518 12:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.518 
12:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.518 [ 00:12:17.518 { 00:12:17.518 "name": "BaseBdev2", 00:12:17.518 "aliases": [ 00:12:17.518 "4f778734-e852-4087-9314-2161463972d7" 00:12:17.518 ], 00:12:17.518 "product_name": "Malloc disk", 00:12:17.518 "block_size": 512, 00:12:17.518 "num_blocks": 65536, 00:12:17.518 "uuid": "4f778734-e852-4087-9314-2161463972d7", 00:12:17.518 "assigned_rate_limits": { 00:12:17.518 "rw_ios_per_sec": 0, 00:12:17.518 "rw_mbytes_per_sec": 0, 00:12:17.518 "r_mbytes_per_sec": 0, 00:12:17.518 "w_mbytes_per_sec": 0 00:12:17.518 }, 00:12:17.518 "claimed": true, 00:12:17.518 "claim_type": "exclusive_write", 00:12:17.518 "zoned": false, 00:12:17.518 "supported_io_types": { 00:12:17.518 "read": true, 00:12:17.518 "write": true, 00:12:17.518 "unmap": true, 00:12:17.518 "flush": true, 00:12:17.518 "reset": true, 00:12:17.518 "nvme_admin": false, 00:12:17.518 "nvme_io": false, 00:12:17.518 "nvme_io_md": false, 00:12:17.518 "write_zeroes": true, 00:12:17.518 "zcopy": true, 00:12:17.518 "get_zone_info": false, 00:12:17.518 "zone_management": false, 00:12:17.518 "zone_append": false, 00:12:17.518 "compare": false, 00:12:17.518 "compare_and_write": false, 00:12:17.518 "abort": true, 00:12:17.518 "seek_hole": false, 00:12:17.518 "seek_data": false, 00:12:17.518 "copy": true, 00:12:17.518 "nvme_iov_md": false 00:12:17.518 }, 00:12:17.518 "memory_domains": [ 00:12:17.518 { 00:12:17.518 "dma_device_id": "system", 00:12:17.518 "dma_device_type": 1 00:12:17.518 }, 00:12:17.518 { 00:12:17.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.518 "dma_device_type": 2 00:12:17.518 } 00:12:17.518 ], 00:12:17.518 "driver_specific": {} 00:12:17.518 } 00:12:17.518 ] 00:12:17.518 12:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.518 12:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:17.518 12:11:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:17.518 12:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:17.518 12:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:12:17.518 12:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.518 12:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:17.518 12:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:17.518 12:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:17.518 12:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:17.518 12:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.518 12:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.518 12:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.518 12:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.518 12:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.518 12:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.518 12:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.518 12:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.518 12:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.518 12:11:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.518 "name": "Existed_Raid", 00:12:17.518 "uuid": "a3e0302f-3b93-4e4b-baa8-9b9164fedb37", 00:12:17.518 "strip_size_kb": 64, 00:12:17.518 "state": "online", 00:12:17.518 "raid_level": "concat", 00:12:17.518 "superblock": true, 00:12:17.518 "num_base_bdevs": 2, 00:12:17.518 "num_base_bdevs_discovered": 2, 00:12:17.518 "num_base_bdevs_operational": 2, 00:12:17.518 "base_bdevs_list": [ 00:12:17.518 { 00:12:17.518 "name": "BaseBdev1", 00:12:17.518 "uuid": "0da41a53-d9bb-41a9-8a1a-237637d71cc1", 00:12:17.518 "is_configured": true, 00:12:17.518 "data_offset": 2048, 00:12:17.518 "data_size": 63488 00:12:17.518 }, 00:12:17.518 { 00:12:17.518 "name": "BaseBdev2", 00:12:17.518 "uuid": "4f778734-e852-4087-9314-2161463972d7", 00:12:17.518 "is_configured": true, 00:12:17.518 "data_offset": 2048, 00:12:17.518 "data_size": 63488 00:12:17.518 } 00:12:17.518 ] 00:12:17.518 }' 00:12:17.518 12:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.518 12:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.086 12:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:18.087 12:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:18.087 12:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:18.087 12:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:18.087 12:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:18.087 12:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:18.087 12:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:12:18.087 12:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.087 12:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:18.087 12:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.087 [2024-11-25 12:11:13.914074] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:18.087 12:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.087 12:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:18.087 "name": "Existed_Raid", 00:12:18.087 "aliases": [ 00:12:18.087 "a3e0302f-3b93-4e4b-baa8-9b9164fedb37" 00:12:18.087 ], 00:12:18.087 "product_name": "Raid Volume", 00:12:18.087 "block_size": 512, 00:12:18.087 "num_blocks": 126976, 00:12:18.087 "uuid": "a3e0302f-3b93-4e4b-baa8-9b9164fedb37", 00:12:18.087 "assigned_rate_limits": { 00:12:18.087 "rw_ios_per_sec": 0, 00:12:18.087 "rw_mbytes_per_sec": 0, 00:12:18.087 "r_mbytes_per_sec": 0, 00:12:18.087 "w_mbytes_per_sec": 0 00:12:18.087 }, 00:12:18.087 "claimed": false, 00:12:18.087 "zoned": false, 00:12:18.087 "supported_io_types": { 00:12:18.087 "read": true, 00:12:18.087 "write": true, 00:12:18.087 "unmap": true, 00:12:18.087 "flush": true, 00:12:18.087 "reset": true, 00:12:18.087 "nvme_admin": false, 00:12:18.087 "nvme_io": false, 00:12:18.087 "nvme_io_md": false, 00:12:18.087 "write_zeroes": true, 00:12:18.087 "zcopy": false, 00:12:18.087 "get_zone_info": false, 00:12:18.087 "zone_management": false, 00:12:18.087 "zone_append": false, 00:12:18.087 "compare": false, 00:12:18.087 "compare_and_write": false, 00:12:18.087 "abort": false, 00:12:18.087 "seek_hole": false, 00:12:18.087 "seek_data": false, 00:12:18.087 "copy": false, 00:12:18.087 "nvme_iov_md": false 00:12:18.087 }, 00:12:18.087 "memory_domains": [ 00:12:18.087 { 00:12:18.087 
"dma_device_id": "system", 00:12:18.087 "dma_device_type": 1 00:12:18.087 }, 00:12:18.087 { 00:12:18.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.087 "dma_device_type": 2 00:12:18.087 }, 00:12:18.087 { 00:12:18.087 "dma_device_id": "system", 00:12:18.087 "dma_device_type": 1 00:12:18.087 }, 00:12:18.087 { 00:12:18.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.087 "dma_device_type": 2 00:12:18.087 } 00:12:18.087 ], 00:12:18.087 "driver_specific": { 00:12:18.087 "raid": { 00:12:18.087 "uuid": "a3e0302f-3b93-4e4b-baa8-9b9164fedb37", 00:12:18.087 "strip_size_kb": 64, 00:12:18.087 "state": "online", 00:12:18.087 "raid_level": "concat", 00:12:18.087 "superblock": true, 00:12:18.087 "num_base_bdevs": 2, 00:12:18.087 "num_base_bdevs_discovered": 2, 00:12:18.087 "num_base_bdevs_operational": 2, 00:12:18.087 "base_bdevs_list": [ 00:12:18.087 { 00:12:18.087 "name": "BaseBdev1", 00:12:18.087 "uuid": "0da41a53-d9bb-41a9-8a1a-237637d71cc1", 00:12:18.087 "is_configured": true, 00:12:18.087 "data_offset": 2048, 00:12:18.087 "data_size": 63488 00:12:18.087 }, 00:12:18.087 { 00:12:18.087 "name": "BaseBdev2", 00:12:18.087 "uuid": "4f778734-e852-4087-9314-2161463972d7", 00:12:18.087 "is_configured": true, 00:12:18.087 "data_offset": 2048, 00:12:18.087 "data_size": 63488 00:12:18.087 } 00:12:18.087 ] 00:12:18.087 } 00:12:18.087 } 00:12:18.087 }' 00:12:18.087 12:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:18.087 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:18.087 BaseBdev2' 00:12:18.087 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.087 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:18.087 12:11:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:18.087 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:18.087 12:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.087 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.087 12:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.087 12:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.087 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:18.087 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:18.087 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:18.087 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:18.087 12:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.087 12:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.087 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.087 12:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.087 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:18.087 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:18.087 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:12:18.087 12:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.087 12:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.345 [2024-11-25 12:11:14.177841] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:18.345 [2024-11-25 12:11:14.178013] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:18.345 [2024-11-25 12:11:14.178183] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:18.345 12:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.345 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:18.345 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:18.345 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:18.345 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:18.345 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:18.345 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:12:18.345 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.345 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:18.345 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:18.345 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:18.345 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:12:18.345 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.345 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.345 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.345 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.345 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.345 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.345 12:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.345 12:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.346 12:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.346 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.346 "name": "Existed_Raid", 00:12:18.346 "uuid": "a3e0302f-3b93-4e4b-baa8-9b9164fedb37", 00:12:18.346 "strip_size_kb": 64, 00:12:18.346 "state": "offline", 00:12:18.346 "raid_level": "concat", 00:12:18.346 "superblock": true, 00:12:18.346 "num_base_bdevs": 2, 00:12:18.346 "num_base_bdevs_discovered": 1, 00:12:18.346 "num_base_bdevs_operational": 1, 00:12:18.346 "base_bdevs_list": [ 00:12:18.346 { 00:12:18.346 "name": null, 00:12:18.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.346 "is_configured": false, 00:12:18.346 "data_offset": 0, 00:12:18.346 "data_size": 63488 00:12:18.346 }, 00:12:18.346 { 00:12:18.346 "name": "BaseBdev2", 00:12:18.346 "uuid": "4f778734-e852-4087-9314-2161463972d7", 00:12:18.346 "is_configured": true, 00:12:18.346 "data_offset": 2048, 00:12:18.346 "data_size": 63488 00:12:18.346 } 00:12:18.346 ] 
00:12:18.346 }' 00:12:18.346 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.346 12:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.912 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:18.912 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:18.912 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.912 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:18.912 12:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.912 12:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.912 12:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.912 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:18.912 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:18.912 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:18.912 12:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.912 12:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.912 [2024-11-25 12:11:14.828543] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:18.912 [2024-11-25 12:11:14.828739] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:18.912 12:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.912 12:11:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:18.912 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:18.912 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.912 12:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.912 12:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.912 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:18.912 12:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.912 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:18.912 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:18.912 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:12:18.912 12:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61943 00:12:18.912 12:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61943 ']' 00:12:18.912 12:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61943 00:12:18.912 12:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:18.912 12:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:18.912 12:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61943 00:12:19.170 killing process with pid 61943 00:12:19.170 12:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:19.170 12:11:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:19.170 12:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61943' 00:12:19.170 12:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61943 00:12:19.170 [2024-11-25 12:11:15.008097] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:19.170 12:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61943 00:12:19.170 [2024-11-25 12:11:15.022677] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:20.103 ************************************ 00:12:20.103 END TEST raid_state_function_test_sb 00:12:20.104 12:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:20.104 00:12:20.104 real 0m5.482s 00:12:20.104 user 0m8.264s 00:12:20.104 sys 0m0.760s 00:12:20.104 12:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:20.104 12:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.104 ************************************ 00:12:20.104 12:11:16 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:12:20.104 12:11:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:20.104 12:11:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:20.104 12:11:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:20.104 ************************************ 00:12:20.104 START TEST raid_superblock_test 00:12:20.104 ************************************ 00:12:20.104 12:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:12:20.104 12:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:12:20.104 12:11:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:12:20.104 12:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:20.104 12:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:20.104 12:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:20.104 12:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:20.104 12:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:20.104 12:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:20.104 12:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:20.104 12:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:20.104 12:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:20.104 12:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:20.104 12:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:20.104 12:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:12:20.104 12:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:20.104 12:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:20.104 12:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62195 00:12:20.104 12:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:20.104 12:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62195 00:12:20.104 12:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62195 ']' 00:12:20.104 
12:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.104 12:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:20.104 12:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.104 12:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:20.104 12:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.363 [2024-11-25 12:11:16.215024] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:12:20.363 [2024-11-25 12:11:16.215407] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62195 ] 00:12:20.363 [2024-11-25 12:11:16.401013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.621 [2024-11-25 12:11:16.535278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.880 [2024-11-25 12:11:16.749210] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:20.880 [2024-11-25 12:11:16.749280] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:21.449 12:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:21.449 12:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:21.449 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:21.449 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:12:21.449 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:21.449 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:21.449 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:21.449 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:21.449 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:21.449 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:21.449 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:21.449 12:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.449 12:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.449 malloc1 00:12:21.449 12:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.449 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:21.449 12:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.449 12:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.449 [2024-11-25 12:11:17.294689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:21.449 [2024-11-25 12:11:17.294773] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.449 [2024-11-25 12:11:17.294807] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:21.449 [2024-11-25 12:11:17.294823] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:12:21.449 [2024-11-25 12:11:17.297600] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.449 [2024-11-25 12:11:17.297644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:21.449 pt1 00:12:21.449 12:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.449 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:21.449 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:21.449 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:21.449 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:21.449 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:21.449 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:21.449 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:21.449 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:21.449 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:21.449 12:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.449 12:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.449 malloc2 00:12:21.449 12:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.449 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:21.449 12:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:21.449 12:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.449 [2024-11-25 12:11:17.351175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:21.449 [2024-11-25 12:11:17.351243] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.449 [2024-11-25 12:11:17.351276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:21.449 [2024-11-25 12:11:17.351291] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.449 [2024-11-25 12:11:17.354113] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.449 [2024-11-25 12:11:17.354157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:21.449 pt2 00:12:21.450 12:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.450 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:21.450 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:21.450 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:12:21.450 12:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.450 12:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.450 [2024-11-25 12:11:17.359258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:21.450 [2024-11-25 12:11:17.361798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:21.450 [2024-11-25 12:11:17.362027] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:21.450 [2024-11-25 12:11:17.362046] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:12:21.450 [2024-11-25 12:11:17.362380] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:21.450 [2024-11-25 12:11:17.362581] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:21.450 [2024-11-25 12:11:17.362608] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:21.450 [2024-11-25 12:11:17.362797] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:21.450 12:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.450 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:21.450 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:21.450 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.450 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:21.450 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:21.450 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:21.450 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.450 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.450 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.450 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.450 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.450 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.450 12:11:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.450 12:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.450 12:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.450 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.450 "name": "raid_bdev1", 00:12:21.450 "uuid": "aa60781b-6f9d-4231-9702-05a642bf4d8b", 00:12:21.450 "strip_size_kb": 64, 00:12:21.450 "state": "online", 00:12:21.450 "raid_level": "concat", 00:12:21.450 "superblock": true, 00:12:21.450 "num_base_bdevs": 2, 00:12:21.450 "num_base_bdevs_discovered": 2, 00:12:21.450 "num_base_bdevs_operational": 2, 00:12:21.450 "base_bdevs_list": [ 00:12:21.450 { 00:12:21.450 "name": "pt1", 00:12:21.450 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:21.450 "is_configured": true, 00:12:21.450 "data_offset": 2048, 00:12:21.450 "data_size": 63488 00:12:21.450 }, 00:12:21.450 { 00:12:21.450 "name": "pt2", 00:12:21.450 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:21.450 "is_configured": true, 00:12:21.450 "data_offset": 2048, 00:12:21.450 "data_size": 63488 00:12:21.450 } 00:12:21.450 ] 00:12:21.450 }' 00:12:21.450 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.450 12:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.018 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:22.018 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:22.018 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:22.018 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:22.018 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:22.018 
12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:22.018 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:22.018 12:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.018 12:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.018 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:22.018 [2024-11-25 12:11:17.875717] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:22.018 12:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.018 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:22.018 "name": "raid_bdev1", 00:12:22.018 "aliases": [ 00:12:22.018 "aa60781b-6f9d-4231-9702-05a642bf4d8b" 00:12:22.018 ], 00:12:22.018 "product_name": "Raid Volume", 00:12:22.018 "block_size": 512, 00:12:22.018 "num_blocks": 126976, 00:12:22.018 "uuid": "aa60781b-6f9d-4231-9702-05a642bf4d8b", 00:12:22.018 "assigned_rate_limits": { 00:12:22.018 "rw_ios_per_sec": 0, 00:12:22.018 "rw_mbytes_per_sec": 0, 00:12:22.018 "r_mbytes_per_sec": 0, 00:12:22.018 "w_mbytes_per_sec": 0 00:12:22.018 }, 00:12:22.018 "claimed": false, 00:12:22.018 "zoned": false, 00:12:22.018 "supported_io_types": { 00:12:22.018 "read": true, 00:12:22.018 "write": true, 00:12:22.018 "unmap": true, 00:12:22.018 "flush": true, 00:12:22.018 "reset": true, 00:12:22.018 "nvme_admin": false, 00:12:22.018 "nvme_io": false, 00:12:22.018 "nvme_io_md": false, 00:12:22.018 "write_zeroes": true, 00:12:22.018 "zcopy": false, 00:12:22.018 "get_zone_info": false, 00:12:22.018 "zone_management": false, 00:12:22.018 "zone_append": false, 00:12:22.018 "compare": false, 00:12:22.018 "compare_and_write": false, 00:12:22.018 "abort": false, 00:12:22.018 "seek_hole": false, 00:12:22.018 
"seek_data": false, 00:12:22.018 "copy": false, 00:12:22.018 "nvme_iov_md": false 00:12:22.018 }, 00:12:22.018 "memory_domains": [ 00:12:22.018 { 00:12:22.018 "dma_device_id": "system", 00:12:22.018 "dma_device_type": 1 00:12:22.018 }, 00:12:22.018 { 00:12:22.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.018 "dma_device_type": 2 00:12:22.018 }, 00:12:22.018 { 00:12:22.018 "dma_device_id": "system", 00:12:22.018 "dma_device_type": 1 00:12:22.018 }, 00:12:22.018 { 00:12:22.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.018 "dma_device_type": 2 00:12:22.018 } 00:12:22.018 ], 00:12:22.018 "driver_specific": { 00:12:22.018 "raid": { 00:12:22.018 "uuid": "aa60781b-6f9d-4231-9702-05a642bf4d8b", 00:12:22.018 "strip_size_kb": 64, 00:12:22.018 "state": "online", 00:12:22.018 "raid_level": "concat", 00:12:22.018 "superblock": true, 00:12:22.018 "num_base_bdevs": 2, 00:12:22.018 "num_base_bdevs_discovered": 2, 00:12:22.018 "num_base_bdevs_operational": 2, 00:12:22.018 "base_bdevs_list": [ 00:12:22.018 { 00:12:22.018 "name": "pt1", 00:12:22.018 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:22.018 "is_configured": true, 00:12:22.018 "data_offset": 2048, 00:12:22.018 "data_size": 63488 00:12:22.018 }, 00:12:22.018 { 00:12:22.018 "name": "pt2", 00:12:22.018 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:22.018 "is_configured": true, 00:12:22.018 "data_offset": 2048, 00:12:22.018 "data_size": 63488 00:12:22.018 } 00:12:22.018 ] 00:12:22.018 } 00:12:22.018 } 00:12:22.018 }' 00:12:22.018 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:22.018 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:22.018 pt2' 00:12:22.018 12:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.018 12:11:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:22.018 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.018 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:22.018 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.018 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.018 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.018 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.018 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.018 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.018 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.018 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:22.018 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.018 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.018 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.018 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.278 [2024-11-25 12:11:18.139727] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=aa60781b-6f9d-4231-9702-05a642bf4d8b 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z aa60781b-6f9d-4231-9702-05a642bf4d8b ']' 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.278 [2024-11-25 12:11:18.183365] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:22.278 [2024-11-25 12:11:18.183411] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:22.278 [2024-11-25 12:11:18.183514] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:22.278 [2024-11-25 12:11:18.183591] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:22.278 [2024-11-25 12:11:18.183613] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.278 [2024-11-25 12:11:18.323468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:22.278 [2024-11-25 12:11:18.325904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:22.278 [2024-11-25 12:11:18.325994] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:22.278 [2024-11-25 12:11:18.326078] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:22.278 [2024-11-25 12:11:18.326114] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:22.278 [2024-11-25 12:11:18.326130] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:12:22.278 request: 00:12:22.278 { 00:12:22.278 "name": "raid_bdev1", 00:12:22.278 "raid_level": "concat", 00:12:22.278 "base_bdevs": [ 00:12:22.278 "malloc1", 00:12:22.278 "malloc2" 00:12:22.278 ], 00:12:22.278 "strip_size_kb": 64, 00:12:22.278 "superblock": false, 00:12:22.278 "method": "bdev_raid_create", 00:12:22.278 "req_id": 1 00:12:22.278 } 00:12:22.278 Got JSON-RPC error response 00:12:22.278 response: 00:12:22.278 { 00:12:22.278 "code": -17, 00:12:22.278 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:22.278 } 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.278 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.278 
12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.538 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:22.538 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:22.538 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:22.538 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.538 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.538 [2024-11-25 12:11:18.387489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:22.538 [2024-11-25 12:11:18.387575] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:22.538 [2024-11-25 12:11:18.387603] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:22.538 [2024-11-25 12:11:18.387622] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.538 [2024-11-25 12:11:18.390506] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.538 [2024-11-25 12:11:18.390556] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:22.538 [2024-11-25 12:11:18.390660] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:22.538 [2024-11-25 12:11:18.390739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:22.538 pt1 00:12:22.538 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.538 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:12:22.538 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:12:22.538 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:22.538 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:22.538 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:22.538 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:22.538 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.538 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.538 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.538 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.538 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.538 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.538 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.538 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.538 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.538 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.538 "name": "raid_bdev1", 00:12:22.538 "uuid": "aa60781b-6f9d-4231-9702-05a642bf4d8b", 00:12:22.538 "strip_size_kb": 64, 00:12:22.538 "state": "configuring", 00:12:22.538 "raid_level": "concat", 00:12:22.538 "superblock": true, 00:12:22.538 "num_base_bdevs": 2, 00:12:22.538 "num_base_bdevs_discovered": 1, 00:12:22.538 "num_base_bdevs_operational": 2, 00:12:22.538 "base_bdevs_list": [ 00:12:22.538 { 00:12:22.538 "name": "pt1", 00:12:22.538 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:12:22.538 "is_configured": true, 00:12:22.538 "data_offset": 2048, 00:12:22.538 "data_size": 63488 00:12:22.538 }, 00:12:22.538 { 00:12:22.538 "name": null, 00:12:22.538 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:22.538 "is_configured": false, 00:12:22.538 "data_offset": 2048, 00:12:22.538 "data_size": 63488 00:12:22.538 } 00:12:22.538 ] 00:12:22.538 }' 00:12:22.538 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.538 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.797 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:12:22.797 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:22.797 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:22.797 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:22.797 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.797 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.797 [2024-11-25 12:11:18.879681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:22.797 [2024-11-25 12:11:18.879767] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:22.797 [2024-11-25 12:11:18.879799] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:22.797 [2024-11-25 12:11:18.879817] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.797 [2024-11-25 12:11:18.880388] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.797 [2024-11-25 12:11:18.880429] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:12:22.797 [2024-11-25 12:11:18.880538] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:22.797 [2024-11-25 12:11:18.880575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:22.797 [2024-11-25 12:11:18.880715] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:22.797 [2024-11-25 12:11:18.880736] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:22.797 [2024-11-25 12:11:18.881030] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:22.797 [2024-11-25 12:11:18.881212] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:22.797 [2024-11-25 12:11:18.881227] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:22.797 [2024-11-25 12:11:18.881412] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:22.797 pt2 00:12:22.797 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.797 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:22.797 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:22.797 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:22.797 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:22.797 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:22.797 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:22.797 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:22.797 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:12:22.797 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.797 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.797 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.797 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.057 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.057 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.057 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.057 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.057 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.057 12:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.057 "name": "raid_bdev1", 00:12:23.057 "uuid": "aa60781b-6f9d-4231-9702-05a642bf4d8b", 00:12:23.057 "strip_size_kb": 64, 00:12:23.057 "state": "online", 00:12:23.057 "raid_level": "concat", 00:12:23.057 "superblock": true, 00:12:23.057 "num_base_bdevs": 2, 00:12:23.057 "num_base_bdevs_discovered": 2, 00:12:23.057 "num_base_bdevs_operational": 2, 00:12:23.057 "base_bdevs_list": [ 00:12:23.057 { 00:12:23.057 "name": "pt1", 00:12:23.057 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:23.057 "is_configured": true, 00:12:23.057 "data_offset": 2048, 00:12:23.057 "data_size": 63488 00:12:23.057 }, 00:12:23.057 { 00:12:23.057 "name": "pt2", 00:12:23.057 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:23.057 "is_configured": true, 00:12:23.057 "data_offset": 2048, 00:12:23.057 "data_size": 63488 00:12:23.057 } 00:12:23.057 ] 00:12:23.057 }' 00:12:23.057 12:11:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.057 12:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.315 12:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:23.315 12:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:23.315 12:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:23.315 12:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:23.315 12:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:23.315 12:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:23.315 12:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:23.315 12:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:23.315 12:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.315 12:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.574 [2024-11-25 12:11:19.408125] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:23.574 12:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.574 12:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:23.574 "name": "raid_bdev1", 00:12:23.574 "aliases": [ 00:12:23.574 "aa60781b-6f9d-4231-9702-05a642bf4d8b" 00:12:23.574 ], 00:12:23.574 "product_name": "Raid Volume", 00:12:23.574 "block_size": 512, 00:12:23.574 "num_blocks": 126976, 00:12:23.574 "uuid": "aa60781b-6f9d-4231-9702-05a642bf4d8b", 00:12:23.574 "assigned_rate_limits": { 00:12:23.574 "rw_ios_per_sec": 0, 00:12:23.574 "rw_mbytes_per_sec": 0, 00:12:23.574 
"r_mbytes_per_sec": 0, 00:12:23.574 "w_mbytes_per_sec": 0 00:12:23.574 }, 00:12:23.574 "claimed": false, 00:12:23.574 "zoned": false, 00:12:23.574 "supported_io_types": { 00:12:23.574 "read": true, 00:12:23.574 "write": true, 00:12:23.574 "unmap": true, 00:12:23.574 "flush": true, 00:12:23.574 "reset": true, 00:12:23.574 "nvme_admin": false, 00:12:23.574 "nvme_io": false, 00:12:23.574 "nvme_io_md": false, 00:12:23.574 "write_zeroes": true, 00:12:23.574 "zcopy": false, 00:12:23.574 "get_zone_info": false, 00:12:23.574 "zone_management": false, 00:12:23.574 "zone_append": false, 00:12:23.574 "compare": false, 00:12:23.574 "compare_and_write": false, 00:12:23.574 "abort": false, 00:12:23.574 "seek_hole": false, 00:12:23.574 "seek_data": false, 00:12:23.574 "copy": false, 00:12:23.574 "nvme_iov_md": false 00:12:23.574 }, 00:12:23.574 "memory_domains": [ 00:12:23.574 { 00:12:23.574 "dma_device_id": "system", 00:12:23.574 "dma_device_type": 1 00:12:23.574 }, 00:12:23.574 { 00:12:23.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.574 "dma_device_type": 2 00:12:23.574 }, 00:12:23.574 { 00:12:23.574 "dma_device_id": "system", 00:12:23.574 "dma_device_type": 1 00:12:23.574 }, 00:12:23.574 { 00:12:23.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.574 "dma_device_type": 2 00:12:23.574 } 00:12:23.574 ], 00:12:23.574 "driver_specific": { 00:12:23.575 "raid": { 00:12:23.575 "uuid": "aa60781b-6f9d-4231-9702-05a642bf4d8b", 00:12:23.575 "strip_size_kb": 64, 00:12:23.575 "state": "online", 00:12:23.575 "raid_level": "concat", 00:12:23.575 "superblock": true, 00:12:23.575 "num_base_bdevs": 2, 00:12:23.575 "num_base_bdevs_discovered": 2, 00:12:23.575 "num_base_bdevs_operational": 2, 00:12:23.575 "base_bdevs_list": [ 00:12:23.575 { 00:12:23.575 "name": "pt1", 00:12:23.575 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:23.575 "is_configured": true, 00:12:23.575 "data_offset": 2048, 00:12:23.575 "data_size": 63488 00:12:23.575 }, 00:12:23.575 { 00:12:23.575 "name": 
"pt2", 00:12:23.575 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:23.575 "is_configured": true, 00:12:23.575 "data_offset": 2048, 00:12:23.575 "data_size": 63488 00:12:23.575 } 00:12:23.575 ] 00:12:23.575 } 00:12:23.575 } 00:12:23.575 }' 00:12:23.575 12:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:23.575 12:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:23.575 pt2' 00:12:23.575 12:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.575 12:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:23.575 12:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.575 12:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.575 12:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:23.575 12:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.575 12:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.575 12:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.575 12:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.575 12:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.575 12:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.575 12:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:23.575 12:11:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.575 12:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.575 12:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.575 12:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.575 12:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.575 12:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.575 12:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:23.575 12:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.575 12:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.575 12:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:23.575 [2024-11-25 12:11:19.660302] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:23.835 12:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.835 12:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' aa60781b-6f9d-4231-9702-05a642bf4d8b '!=' aa60781b-6f9d-4231-9702-05a642bf4d8b ']' 00:12:23.835 12:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:12:23.835 12:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:23.835 12:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:23.835 12:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62195 00:12:23.835 12:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62195 ']' 00:12:23.835 12:11:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 62195 00:12:23.835 12:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:23.835 12:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:23.835 12:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62195 00:12:23.835 killing process with pid 62195 00:12:23.835 12:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:23.835 12:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:23.835 12:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62195' 00:12:23.835 12:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62195 00:12:23.835 [2024-11-25 12:11:19.736903] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:23.835 [2024-11-25 12:11:19.737012] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:23.835 [2024-11-25 12:11:19.737080] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:23.835 [2024-11-25 12:11:19.737100] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:23.835 12:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62195 00:12:24.095 [2024-11-25 12:11:19.925718] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:25.032 12:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:25.032 00:12:25.032 real 0m4.863s 00:12:25.032 user 0m7.177s 00:12:25.032 sys 0m0.711s 00:12:25.032 12:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:25.032 ************************************ 00:12:25.032 END TEST 
raid_superblock_test 00:12:25.032 ************************************ 00:12:25.032 12:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.032 12:11:21 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:12:25.032 12:11:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:25.032 12:11:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:25.032 12:11:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:25.032 ************************************ 00:12:25.032 START TEST raid_read_error_test 00:12:25.032 ************************************ 00:12:25.032 12:11:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:12:25.032 12:11:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:25.032 12:11:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:12:25.032 12:11:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:25.032 12:11:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:25.032 12:11:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:25.032 12:11:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:25.032 12:11:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:25.032 12:11:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:25.032 12:11:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:25.032 12:11:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:25.032 12:11:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:25.032 12:11:21 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:25.032 12:11:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:25.032 12:11:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:25.032 12:11:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:25.032 12:11:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:25.032 12:11:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:25.032 12:11:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:25.032 12:11:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:25.032 12:11:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:25.032 12:11:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:25.032 12:11:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:25.032 12:11:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.KjR3bG3hKo 00:12:25.032 12:11:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62412 00:12:25.032 12:11:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62412 00:12:25.032 12:11:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:25.032 12:11:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62412 ']' 00:12:25.032 12:11:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.032 12:11:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:25.032 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.032 12:11:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.032 12:11:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:25.032 12:11:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.291 [2024-11-25 12:11:21.131080] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:12:25.292 [2024-11-25 12:11:21.131238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62412 ] 00:12:25.292 [2024-11-25 12:11:21.306752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.551 [2024-11-25 12:11:21.441706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.809 [2024-11-25 12:11:21.652238] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:25.809 [2024-11-25 12:11:21.652333] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:26.068 12:11:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:26.068 12:11:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:26.068 12:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:26.068 12:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:26.068 12:11:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.068 12:11:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.327 
BaseBdev1_malloc 00:12:26.327 12:11:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.327 12:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:26.327 12:11:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.327 12:11:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.327 true 00:12:26.327 12:11:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.327 12:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:26.327 12:11:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.327 12:11:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.327 [2024-11-25 12:11:22.193411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:26.327 [2024-11-25 12:11:22.193492] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.327 [2024-11-25 12:11:22.193525] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:26.327 [2024-11-25 12:11:22.193543] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.327 [2024-11-25 12:11:22.196322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.327 [2024-11-25 12:11:22.196397] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:26.327 BaseBdev1 00:12:26.327 12:11:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.327 12:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:26.327 12:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:26.327 12:11:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.327 12:11:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.327 BaseBdev2_malloc 00:12:26.327 12:11:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.327 12:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:26.327 12:11:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.327 12:11:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.327 true 00:12:26.327 12:11:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.327 12:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:26.327 12:11:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.327 12:11:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.327 [2024-11-25 12:11:22.252938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:26.327 [2024-11-25 12:11:22.253020] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.327 [2024-11-25 12:11:22.253049] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:26.327 [2024-11-25 12:11:22.253075] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.327 [2024-11-25 12:11:22.256027] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.327 [2024-11-25 12:11:22.256089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:26.327 BaseBdev2 00:12:26.327 12:11:22 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.327 12:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:12:26.327 12:11:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.327 12:11:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.327 [2024-11-25 12:11:22.261034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:26.327 [2024-11-25 12:11:22.263576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:26.327 [2024-11-25 12:11:22.263891] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:26.327 [2024-11-25 12:11:22.263921] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:26.327 [2024-11-25 12:11:22.264220] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:26.327 [2024-11-25 12:11:22.264479] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:26.327 [2024-11-25 12:11:22.264508] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:26.327 [2024-11-25 12:11:22.264694] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:26.327 12:11:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.327 12:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:26.327 12:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.327 12:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:26.327 12:11:22 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:26.327 12:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:26.327 12:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:26.327 12:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.327 12:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.327 12:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.327 12:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.327 12:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.327 12:11:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.327 12:11:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.327 12:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.327 12:11:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.327 12:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.327 "name": "raid_bdev1", 00:12:26.327 "uuid": "1c318b12-c284-48f7-a538-7a1ce8c5a348", 00:12:26.327 "strip_size_kb": 64, 00:12:26.327 "state": "online", 00:12:26.327 "raid_level": "concat", 00:12:26.327 "superblock": true, 00:12:26.327 "num_base_bdevs": 2, 00:12:26.327 "num_base_bdevs_discovered": 2, 00:12:26.327 "num_base_bdevs_operational": 2, 00:12:26.327 "base_bdevs_list": [ 00:12:26.327 { 00:12:26.327 "name": "BaseBdev1", 00:12:26.327 "uuid": "ea48a4fe-9fe9-5ba1-aa7a-15ff1c227cdc", 00:12:26.327 "is_configured": true, 00:12:26.327 "data_offset": 2048, 00:12:26.327 "data_size": 63488 00:12:26.327 }, 
00:12:26.327 { 00:12:26.327 "name": "BaseBdev2", 00:12:26.327 "uuid": "530c04af-a310-54e1-89a9-fe37e0b4054d", 00:12:26.327 "is_configured": true, 00:12:26.327 "data_offset": 2048, 00:12:26.327 "data_size": 63488 00:12:26.327 } 00:12:26.327 ] 00:12:26.327 }' 00:12:26.327 12:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.327 12:11:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.901 12:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:26.901 12:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:26.901 [2024-11-25 12:11:22.898652] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:27.837 12:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:27.837 12:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.837 12:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.837 12:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.837 12:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:27.837 12:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:27.837 12:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:12:27.837 12:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:27.837 12:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.837 12:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:27.837 12:11:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:27.837 12:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:27.837 12:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:27.837 12:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.837 12:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.837 12:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.837 12:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.837 12:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.837 12:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.837 12:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.837 12:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.837 12:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.837 12:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.837 "name": "raid_bdev1", 00:12:27.837 "uuid": "1c318b12-c284-48f7-a538-7a1ce8c5a348", 00:12:27.837 "strip_size_kb": 64, 00:12:27.837 "state": "online", 00:12:27.837 "raid_level": "concat", 00:12:27.837 "superblock": true, 00:12:27.837 "num_base_bdevs": 2, 00:12:27.837 "num_base_bdevs_discovered": 2, 00:12:27.837 "num_base_bdevs_operational": 2, 00:12:27.837 "base_bdevs_list": [ 00:12:27.837 { 00:12:27.837 "name": "BaseBdev1", 00:12:27.837 "uuid": "ea48a4fe-9fe9-5ba1-aa7a-15ff1c227cdc", 00:12:27.837 "is_configured": true, 00:12:27.837 "data_offset": 2048, 00:12:27.837 "data_size": 63488 00:12:27.837 }, 
00:12:27.837 { 00:12:27.837 "name": "BaseBdev2", 00:12:27.837 "uuid": "530c04af-a310-54e1-89a9-fe37e0b4054d", 00:12:27.837 "is_configured": true, 00:12:27.837 "data_offset": 2048, 00:12:27.837 "data_size": 63488 00:12:27.837 } 00:12:27.837 ] 00:12:27.838 }' 00:12:27.838 12:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.838 12:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.443 12:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:28.443 12:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.443 12:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.443 [2024-11-25 12:11:24.340510] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:28.443 [2024-11-25 12:11:24.340564] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:28.443 [2024-11-25 12:11:24.344127] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:28.443 [2024-11-25 12:11:24.344191] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:28.443 [2024-11-25 12:11:24.344234] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:28.443 [2024-11-25 12:11:24.344255] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:28.443 { 00:12:28.443 "results": [ 00:12:28.443 { 00:12:28.443 "job": "raid_bdev1", 00:12:28.443 "core_mask": "0x1", 00:12:28.443 "workload": "randrw", 00:12:28.443 "percentage": 50, 00:12:28.443 "status": "finished", 00:12:28.443 "queue_depth": 1, 00:12:28.443 "io_size": 131072, 00:12:28.443 "runtime": 1.439481, 00:12:28.443 "iops": 10371.79372287651, 00:12:28.443 "mibps": 1296.4742153595637, 00:12:28.443 "io_failed": 1, 
00:12:28.443 "io_timeout": 0, 00:12:28.443 "avg_latency_us": 134.85872881923515, 00:12:28.443 "min_latency_us": 38.63272727272727, 00:12:28.443 "max_latency_us": 2040.5527272727272 00:12:28.443 } 00:12:28.443 ], 00:12:28.443 "core_count": 1 00:12:28.443 } 00:12:28.443 12:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.443 12:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62412 00:12:28.444 12:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62412 ']' 00:12:28.444 12:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62412 00:12:28.444 12:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:28.444 12:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:28.444 12:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62412 00:12:28.444 12:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:28.444 12:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:28.444 killing process with pid 62412 00:12:28.444 12:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62412' 00:12:28.444 12:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62412 00:12:28.444 12:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62412 00:12:28.444 [2024-11-25 12:11:24.382765] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:28.444 [2024-11-25 12:11:24.510956] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:29.841 12:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.KjR3bG3hKo 00:12:29.842 12:11:25 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:29.842 12:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:29.842 12:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:12:29.842 12:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:29.842 12:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:29.842 12:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:29.842 12:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:12:29.842 00:12:29.842 real 0m4.632s 00:12:29.842 user 0m5.790s 00:12:29.842 sys 0m0.575s 00:12:29.842 12:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:29.842 ************************************ 00:12:29.842 END TEST raid_read_error_test 00:12:29.842 ************************************ 00:12:29.842 12:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.842 12:11:25 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:12:29.842 12:11:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:29.842 12:11:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:29.842 12:11:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:29.842 ************************************ 00:12:29.842 START TEST raid_write_error_test 00:12:29.842 ************************************ 00:12:29.842 12:11:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:12:29.842 12:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:29.842 12:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:12:29.842 12:11:25 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:29.842 12:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:29.842 12:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:29.842 12:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:29.842 12:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:29.842 12:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:29.842 12:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:29.842 12:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:29.842 12:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:29.842 12:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:29.842 12:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:29.842 12:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:29.842 12:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:29.842 12:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:29.842 12:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:29.842 12:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:29.842 12:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:29.842 12:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:29.842 12:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:29.842 12:11:25 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:29.842 12:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qq9satsCOk 00:12:29.842 12:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62554 00:12:29.842 12:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:29.842 12:11:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62554 00:12:29.842 12:11:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62554 ']' 00:12:29.842 12:11:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.842 12:11:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:29.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.842 12:11:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.842 12:11:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:29.842 12:11:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.842 [2024-11-25 12:11:25.826405] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:12:29.842 [2024-11-25 12:11:25.826582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62554 ] 00:12:30.101 [2024-11-25 12:11:26.012217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.101 [2024-11-25 12:11:26.135393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.361 [2024-11-25 12:11:26.340949] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:30.361 [2024-11-25 12:11:26.341039] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.928 BaseBdev1_malloc 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.928 true 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.928 [2024-11-25 12:11:26.830606] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:30.928 [2024-11-25 12:11:26.830676] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.928 [2024-11-25 12:11:26.830709] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:30.928 [2024-11-25 12:11:26.830726] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.928 [2024-11-25 12:11:26.833590] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.928 [2024-11-25 12:11:26.833642] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:30.928 BaseBdev1 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.928 BaseBdev2_malloc 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:30.928 12:11:26 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.928 true 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.928 [2024-11-25 12:11:26.891005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:30.928 [2024-11-25 12:11:26.891071] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.928 [2024-11-25 12:11:26.891100] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:30.928 [2024-11-25 12:11:26.891117] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.928 [2024-11-25 12:11:26.893943] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.928 [2024-11-25 12:11:26.893986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:30.928 BaseBdev2 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.928 [2024-11-25 12:11:26.899087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:12:30.928 [2024-11-25 12:11:26.901589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:30.928 [2024-11-25 12:11:26.901839] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:30.928 [2024-11-25 12:11:26.901872] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:30.928 [2024-11-25 12:11:26.902176] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:30.928 [2024-11-25 12:11:26.902431] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:30.928 [2024-11-25 12:11:26.902460] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:30.928 [2024-11-25 12:11:26.902652] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.928 12:11:26 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.928 12:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.928 "name": "raid_bdev1", 00:12:30.928 "uuid": "4059eaa8-fc99-4e68-ae4b-7f075fbaed90", 00:12:30.928 "strip_size_kb": 64, 00:12:30.928 "state": "online", 00:12:30.928 "raid_level": "concat", 00:12:30.929 "superblock": true, 00:12:30.929 "num_base_bdevs": 2, 00:12:30.929 "num_base_bdevs_discovered": 2, 00:12:30.929 "num_base_bdevs_operational": 2, 00:12:30.929 "base_bdevs_list": [ 00:12:30.929 { 00:12:30.929 "name": "BaseBdev1", 00:12:30.929 "uuid": "ac4e90d4-7bc8-5186-b6c3-9490219915f0", 00:12:30.929 "is_configured": true, 00:12:30.929 "data_offset": 2048, 00:12:30.929 "data_size": 63488 00:12:30.929 }, 00:12:30.929 { 00:12:30.929 "name": "BaseBdev2", 00:12:30.929 "uuid": "4a5f805d-56ed-5538-b709-8d1df4e5f213", 00:12:30.929 "is_configured": true, 00:12:30.929 "data_offset": 2048, 00:12:30.929 "data_size": 63488 00:12:30.929 } 00:12:30.929 ] 00:12:30.929 }' 00:12:30.929 12:11:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.929 12:11:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.501 12:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:12:31.501 12:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:31.501 [2024-11-25 12:11:27.560618] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:32.462 12:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:32.462 12:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.462 12:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.462 12:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.462 12:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:32.462 12:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:32.462 12:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:12:32.462 12:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:32.462 12:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:32.462 12:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:32.462 12:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:32.462 12:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:32.462 12:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:32.462 12:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.462 12:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:12:32.462 12:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.462 12:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.462 12:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.462 12:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.462 12:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.462 12:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.462 12:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.462 12:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.462 "name": "raid_bdev1", 00:12:32.463 "uuid": "4059eaa8-fc99-4e68-ae4b-7f075fbaed90", 00:12:32.463 "strip_size_kb": 64, 00:12:32.463 "state": "online", 00:12:32.463 "raid_level": "concat", 00:12:32.463 "superblock": true, 00:12:32.463 "num_base_bdevs": 2, 00:12:32.463 "num_base_bdevs_discovered": 2, 00:12:32.463 "num_base_bdevs_operational": 2, 00:12:32.463 "base_bdevs_list": [ 00:12:32.463 { 00:12:32.463 "name": "BaseBdev1", 00:12:32.463 "uuid": "ac4e90d4-7bc8-5186-b6c3-9490219915f0", 00:12:32.463 "is_configured": true, 00:12:32.463 "data_offset": 2048, 00:12:32.463 "data_size": 63488 00:12:32.463 }, 00:12:32.463 { 00:12:32.463 "name": "BaseBdev2", 00:12:32.463 "uuid": "4a5f805d-56ed-5538-b709-8d1df4e5f213", 00:12:32.463 "is_configured": true, 00:12:32.463 "data_offset": 2048, 00:12:32.463 "data_size": 63488 00:12:32.463 } 00:12:32.463 ] 00:12:32.463 }' 00:12:32.463 12:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.463 12:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.061 12:11:29 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:33.061 12:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.061 12:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.061 [2024-11-25 12:11:29.032094] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:33.061 [2024-11-25 12:11:29.032140] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:33.061 [2024-11-25 12:11:29.035501] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:33.061 [2024-11-25 12:11:29.035566] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:33.061 [2024-11-25 12:11:29.035612] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:33.061 [2024-11-25 12:11:29.035634] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:33.061 { 00:12:33.061 "results": [ 00:12:33.061 { 00:12:33.061 "job": "raid_bdev1", 00:12:33.061 "core_mask": "0x1", 00:12:33.061 "workload": "randrw", 00:12:33.061 "percentage": 50, 00:12:33.061 "status": "finished", 00:12:33.061 "queue_depth": 1, 00:12:33.061 "io_size": 131072, 00:12:33.061 "runtime": 1.469083, 00:12:33.061 "iops": 10962.62090024866, 00:12:33.061 "mibps": 1370.3276125310824, 00:12:33.061 "io_failed": 1, 00:12:33.061 "io_timeout": 0, 00:12:33.061 "avg_latency_us": 127.43146111556393, 00:12:33.061 "min_latency_us": 41.658181818181816, 00:12:33.061 "max_latency_us": 1876.7127272727273 00:12:33.061 } 00:12:33.061 ], 00:12:33.061 "core_count": 1 00:12:33.061 } 00:12:33.061 12:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.061 12:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62554 00:12:33.061 12:11:29 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62554 ']' 00:12:33.061 12:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62554 00:12:33.061 12:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:33.061 12:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:33.061 12:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62554 00:12:33.061 12:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:33.061 12:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:33.061 killing process with pid 62554 00:12:33.061 12:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62554' 00:12:33.061 12:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62554 00:12:33.061 [2024-11-25 12:11:29.072817] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:33.061 12:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62554 00:12:33.323 [2024-11-25 12:11:29.198777] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:34.296 12:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qq9satsCOk 00:12:34.296 12:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:34.296 12:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:34.296 12:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.68 00:12:34.296 12:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:34.296 12:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:34.296 12:11:30 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:34.296 12:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.68 != \0\.\0\0 ]] 00:12:34.296 00:12:34.296 real 0m4.584s 00:12:34.296 user 0m5.783s 00:12:34.296 sys 0m0.576s 00:12:34.296 12:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:34.296 ************************************ 00:12:34.296 12:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.296 END TEST raid_write_error_test 00:12:34.296 ************************************ 00:12:34.296 12:11:30 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:34.296 12:11:30 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:12:34.296 12:11:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:34.296 12:11:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:34.296 12:11:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:34.296 ************************************ 00:12:34.296 START TEST raid_state_function_test 00:12:34.296 ************************************ 00:12:34.296 12:11:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:12:34.296 12:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:34.296 12:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:12:34.296 12:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:34.296 12:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:34.296 12:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:34.296 12:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:12:34.296 12:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:34.297 12:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:34.297 12:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:34.297 12:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:34.297 12:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:34.297 12:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:34.297 12:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:34.297 12:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:34.297 12:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:34.297 12:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:34.297 12:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:34.297 12:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:34.297 12:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:34.297 12:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:34.297 12:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:34.297 12:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:34.297 12:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62703 00:12:34.297 12:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:34.297 12:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62703' 00:12:34.297 Process raid pid: 62703 00:12:34.297 12:11:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62703 00:12:34.297 12:11:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62703 ']' 00:12:34.297 12:11:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.297 12:11:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:34.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.297 12:11:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.297 12:11:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:34.297 12:11:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.557 [2024-11-25 12:11:30.502075] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:12:34.557 [2024-11-25 12:11:30.503301] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:34.816 [2024-11-25 12:11:30.700077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.816 [2024-11-25 12:11:30.827515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.075 [2024-11-25 12:11:31.035951] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:35.075 [2024-11-25 12:11:31.036023] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:35.642 12:11:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:35.642 12:11:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:35.642 12:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:35.642 12:11:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.642 12:11:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.642 [2024-11-25 12:11:31.428006] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:35.642 [2024-11-25 12:11:31.428067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:35.642 [2024-11-25 12:11:31.428085] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:35.642 [2024-11-25 12:11:31.428102] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:35.642 12:11:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.642 12:11:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:35.642 12:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:35.642 12:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:35.642 12:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.642 12:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.642 12:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:35.642 12:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.642 12:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.642 12:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.642 12:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.643 12:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.643 12:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.643 12:11:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.643 12:11:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.643 12:11:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.643 12:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.643 "name": "Existed_Raid", 00:12:35.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.643 "strip_size_kb": 0, 00:12:35.643 "state": "configuring", 00:12:35.643 
"raid_level": "raid1", 00:12:35.643 "superblock": false, 00:12:35.643 "num_base_bdevs": 2, 00:12:35.643 "num_base_bdevs_discovered": 0, 00:12:35.643 "num_base_bdevs_operational": 2, 00:12:35.643 "base_bdevs_list": [ 00:12:35.643 { 00:12:35.643 "name": "BaseBdev1", 00:12:35.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.643 "is_configured": false, 00:12:35.643 "data_offset": 0, 00:12:35.643 "data_size": 0 00:12:35.643 }, 00:12:35.643 { 00:12:35.643 "name": "BaseBdev2", 00:12:35.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.643 "is_configured": false, 00:12:35.643 "data_offset": 0, 00:12:35.643 "data_size": 0 00:12:35.643 } 00:12:35.643 ] 00:12:35.643 }' 00:12:35.643 12:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.643 12:11:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.901 12:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:35.901 12:11:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.901 12:11:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.901 [2024-11-25 12:11:31.928102] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:35.901 [2024-11-25 12:11:31.928148] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:35.901 12:11:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.901 12:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:35.901 12:11:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.901 12:11:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:35.901 [2024-11-25 12:11:31.936060] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:35.901 [2024-11-25 12:11:31.936112] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:35.901 [2024-11-25 12:11:31.936131] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:35.901 [2024-11-25 12:11:31.936150] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:35.901 12:11:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.901 12:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:35.901 12:11:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.901 12:11:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.901 [2024-11-25 12:11:31.980691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:35.901 BaseBdev1 00:12:35.901 12:11:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.902 12:11:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:35.902 12:11:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:35.902 12:11:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:35.902 12:11:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:35.902 12:11:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:35.902 12:11:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:35.902 12:11:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:12:35.902 12:11:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.902 12:11:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.161 12:11:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.161 12:11:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:36.161 12:11:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.161 12:11:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.161 [ 00:12:36.161 { 00:12:36.161 "name": "BaseBdev1", 00:12:36.161 "aliases": [ 00:12:36.161 "16e0a5d7-f9d2-45ec-ae95-48d7a223c626" 00:12:36.161 ], 00:12:36.161 "product_name": "Malloc disk", 00:12:36.161 "block_size": 512, 00:12:36.161 "num_blocks": 65536, 00:12:36.161 "uuid": "16e0a5d7-f9d2-45ec-ae95-48d7a223c626", 00:12:36.161 "assigned_rate_limits": { 00:12:36.161 "rw_ios_per_sec": 0, 00:12:36.161 "rw_mbytes_per_sec": 0, 00:12:36.161 "r_mbytes_per_sec": 0, 00:12:36.161 "w_mbytes_per_sec": 0 00:12:36.161 }, 00:12:36.161 "claimed": true, 00:12:36.161 "claim_type": "exclusive_write", 00:12:36.161 "zoned": false, 00:12:36.161 "supported_io_types": { 00:12:36.161 "read": true, 00:12:36.161 "write": true, 00:12:36.161 "unmap": true, 00:12:36.161 "flush": true, 00:12:36.161 "reset": true, 00:12:36.161 "nvme_admin": false, 00:12:36.161 "nvme_io": false, 00:12:36.161 "nvme_io_md": false, 00:12:36.161 "write_zeroes": true, 00:12:36.161 "zcopy": true, 00:12:36.161 "get_zone_info": false, 00:12:36.161 "zone_management": false, 00:12:36.161 "zone_append": false, 00:12:36.161 "compare": false, 00:12:36.161 "compare_and_write": false, 00:12:36.161 "abort": true, 00:12:36.161 "seek_hole": false, 00:12:36.161 "seek_data": false, 00:12:36.161 "copy": true, 00:12:36.161 "nvme_iov_md": 
false 00:12:36.161 }, 00:12:36.161 "memory_domains": [ 00:12:36.161 { 00:12:36.161 "dma_device_id": "system", 00:12:36.161 "dma_device_type": 1 00:12:36.161 }, 00:12:36.161 { 00:12:36.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.161 "dma_device_type": 2 00:12:36.161 } 00:12:36.161 ], 00:12:36.161 "driver_specific": {} 00:12:36.161 } 00:12:36.161 ] 00:12:36.161 12:11:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.161 12:11:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:36.161 12:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:36.161 12:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:36.161 12:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:36.161 12:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.161 12:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.161 12:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:36.161 12:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.161 12:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.161 12:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.161 12:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.161 12:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.161 12:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:36.161 
12:11:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.161 12:11:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.161 12:11:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.161 12:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.161 "name": "Existed_Raid", 00:12:36.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.161 "strip_size_kb": 0, 00:12:36.161 "state": "configuring", 00:12:36.161 "raid_level": "raid1", 00:12:36.161 "superblock": false, 00:12:36.161 "num_base_bdevs": 2, 00:12:36.161 "num_base_bdevs_discovered": 1, 00:12:36.161 "num_base_bdevs_operational": 2, 00:12:36.161 "base_bdevs_list": [ 00:12:36.161 { 00:12:36.161 "name": "BaseBdev1", 00:12:36.161 "uuid": "16e0a5d7-f9d2-45ec-ae95-48d7a223c626", 00:12:36.161 "is_configured": true, 00:12:36.161 "data_offset": 0, 00:12:36.161 "data_size": 65536 00:12:36.161 }, 00:12:36.161 { 00:12:36.161 "name": "BaseBdev2", 00:12:36.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.161 "is_configured": false, 00:12:36.161 "data_offset": 0, 00:12:36.161 "data_size": 0 00:12:36.161 } 00:12:36.161 ] 00:12:36.161 }' 00:12:36.161 12:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.161 12:11:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.729 12:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:36.729 12:11:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.729 12:11:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.729 [2024-11-25 12:11:32.540898] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:36.730 [2024-11-25 12:11:32.540959] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:36.730 12:11:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.730 12:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:36.730 12:11:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.730 12:11:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.730 [2024-11-25 12:11:32.548927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:36.730 [2024-11-25 12:11:32.551437] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:36.730 [2024-11-25 12:11:32.551490] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:36.730 12:11:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.730 12:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:36.730 12:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:36.730 12:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:36.730 12:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:36.730 12:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:36.730 12:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.730 12:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.730 12:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:12:36.730 12:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.730 12:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.730 12:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.730 12:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.730 12:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.730 12:11:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.730 12:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:36.730 12:11:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.730 12:11:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.730 12:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.730 "name": "Existed_Raid", 00:12:36.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.730 "strip_size_kb": 0, 00:12:36.730 "state": "configuring", 00:12:36.730 "raid_level": "raid1", 00:12:36.730 "superblock": false, 00:12:36.730 "num_base_bdevs": 2, 00:12:36.730 "num_base_bdevs_discovered": 1, 00:12:36.730 "num_base_bdevs_operational": 2, 00:12:36.730 "base_bdevs_list": [ 00:12:36.730 { 00:12:36.730 "name": "BaseBdev1", 00:12:36.730 "uuid": "16e0a5d7-f9d2-45ec-ae95-48d7a223c626", 00:12:36.730 "is_configured": true, 00:12:36.730 "data_offset": 0, 00:12:36.730 "data_size": 65536 00:12:36.730 }, 00:12:36.730 { 00:12:36.730 "name": "BaseBdev2", 00:12:36.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.730 "is_configured": false, 00:12:36.730 "data_offset": 0, 00:12:36.730 "data_size": 0 00:12:36.730 } 00:12:36.730 ] 
00:12:36.730 }' 00:12:36.730 12:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.730 12:11:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.989 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:36.989 12:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.989 12:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.248 [2024-11-25 12:11:33.091965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:37.248 [2024-11-25 12:11:33.092031] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:37.248 [2024-11-25 12:11:33.092044] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:37.248 [2024-11-25 12:11:33.092406] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:37.248 [2024-11-25 12:11:33.092624] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:37.248 [2024-11-25 12:11:33.092657] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:37.248 [2024-11-25 12:11:33.092968] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:37.248 BaseBdev2 00:12:37.248 12:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.248 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:37.248 12:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:37.248 12:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:37.248 12:11:33 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@905 -- # local i 00:12:37.248 12:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:37.248 12:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:37.248 12:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:37.248 12:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.248 12:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.248 12:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.248 12:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:37.248 12:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.248 12:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.248 [ 00:12:37.248 { 00:12:37.248 "name": "BaseBdev2", 00:12:37.248 "aliases": [ 00:12:37.248 "cb91ed4f-270b-4cca-ac45-00a69a77e114" 00:12:37.248 ], 00:12:37.248 "product_name": "Malloc disk", 00:12:37.248 "block_size": 512, 00:12:37.248 "num_blocks": 65536, 00:12:37.248 "uuid": "cb91ed4f-270b-4cca-ac45-00a69a77e114", 00:12:37.248 "assigned_rate_limits": { 00:12:37.248 "rw_ios_per_sec": 0, 00:12:37.248 "rw_mbytes_per_sec": 0, 00:12:37.248 "r_mbytes_per_sec": 0, 00:12:37.248 "w_mbytes_per_sec": 0 00:12:37.248 }, 00:12:37.248 "claimed": true, 00:12:37.248 "claim_type": "exclusive_write", 00:12:37.248 "zoned": false, 00:12:37.248 "supported_io_types": { 00:12:37.248 "read": true, 00:12:37.248 "write": true, 00:12:37.248 "unmap": true, 00:12:37.248 "flush": true, 00:12:37.248 "reset": true, 00:12:37.248 "nvme_admin": false, 00:12:37.248 "nvme_io": false, 00:12:37.248 "nvme_io_md": false, 00:12:37.248 "write_zeroes": 
true, 00:12:37.248 "zcopy": true, 00:12:37.248 "get_zone_info": false, 00:12:37.248 "zone_management": false, 00:12:37.248 "zone_append": false, 00:12:37.248 "compare": false, 00:12:37.248 "compare_and_write": false, 00:12:37.248 "abort": true, 00:12:37.248 "seek_hole": false, 00:12:37.249 "seek_data": false, 00:12:37.249 "copy": true, 00:12:37.249 "nvme_iov_md": false 00:12:37.249 }, 00:12:37.249 "memory_domains": [ 00:12:37.249 { 00:12:37.249 "dma_device_id": "system", 00:12:37.249 "dma_device_type": 1 00:12:37.249 }, 00:12:37.249 { 00:12:37.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.249 "dma_device_type": 2 00:12:37.249 } 00:12:37.249 ], 00:12:37.249 "driver_specific": {} 00:12:37.249 } 00:12:37.249 ] 00:12:37.249 12:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.249 12:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:37.249 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:37.249 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:37.249 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:12:37.249 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:37.249 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.249 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.249 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.249 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:37.249 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.249 12:11:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.249 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.249 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.249 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.249 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.249 12:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.249 12:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.249 12:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.249 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.249 "name": "Existed_Raid", 00:12:37.249 "uuid": "45a26a74-9d3a-4035-b77e-28fd9425c0aa", 00:12:37.249 "strip_size_kb": 0, 00:12:37.249 "state": "online", 00:12:37.249 "raid_level": "raid1", 00:12:37.249 "superblock": false, 00:12:37.249 "num_base_bdevs": 2, 00:12:37.249 "num_base_bdevs_discovered": 2, 00:12:37.249 "num_base_bdevs_operational": 2, 00:12:37.249 "base_bdevs_list": [ 00:12:37.249 { 00:12:37.249 "name": "BaseBdev1", 00:12:37.249 "uuid": "16e0a5d7-f9d2-45ec-ae95-48d7a223c626", 00:12:37.249 "is_configured": true, 00:12:37.249 "data_offset": 0, 00:12:37.249 "data_size": 65536 00:12:37.249 }, 00:12:37.249 { 00:12:37.249 "name": "BaseBdev2", 00:12:37.249 "uuid": "cb91ed4f-270b-4cca-ac45-00a69a77e114", 00:12:37.249 "is_configured": true, 00:12:37.249 "data_offset": 0, 00:12:37.249 "data_size": 65536 00:12:37.249 } 00:12:37.249 ] 00:12:37.249 }' 00:12:37.249 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.249 12:11:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.846 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:37.846 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:37.846 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:37.846 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:37.846 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:37.846 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:37.846 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:37.846 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:37.846 12:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.846 12:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.846 [2024-11-25 12:11:33.624648] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:37.846 12:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.846 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:37.846 "name": "Existed_Raid", 00:12:37.846 "aliases": [ 00:12:37.846 "45a26a74-9d3a-4035-b77e-28fd9425c0aa" 00:12:37.846 ], 00:12:37.846 "product_name": "Raid Volume", 00:12:37.846 "block_size": 512, 00:12:37.846 "num_blocks": 65536, 00:12:37.846 "uuid": "45a26a74-9d3a-4035-b77e-28fd9425c0aa", 00:12:37.846 "assigned_rate_limits": { 00:12:37.846 "rw_ios_per_sec": 0, 00:12:37.846 "rw_mbytes_per_sec": 0, 00:12:37.846 "r_mbytes_per_sec": 0, 00:12:37.846 
"w_mbytes_per_sec": 0 00:12:37.846 }, 00:12:37.846 "claimed": false, 00:12:37.846 "zoned": false, 00:12:37.846 "supported_io_types": { 00:12:37.846 "read": true, 00:12:37.846 "write": true, 00:12:37.846 "unmap": false, 00:12:37.846 "flush": false, 00:12:37.846 "reset": true, 00:12:37.846 "nvme_admin": false, 00:12:37.846 "nvme_io": false, 00:12:37.846 "nvme_io_md": false, 00:12:37.846 "write_zeroes": true, 00:12:37.846 "zcopy": false, 00:12:37.846 "get_zone_info": false, 00:12:37.846 "zone_management": false, 00:12:37.846 "zone_append": false, 00:12:37.846 "compare": false, 00:12:37.846 "compare_and_write": false, 00:12:37.846 "abort": false, 00:12:37.846 "seek_hole": false, 00:12:37.846 "seek_data": false, 00:12:37.846 "copy": false, 00:12:37.846 "nvme_iov_md": false 00:12:37.846 }, 00:12:37.846 "memory_domains": [ 00:12:37.846 { 00:12:37.846 "dma_device_id": "system", 00:12:37.846 "dma_device_type": 1 00:12:37.846 }, 00:12:37.846 { 00:12:37.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.846 "dma_device_type": 2 00:12:37.846 }, 00:12:37.846 { 00:12:37.846 "dma_device_id": "system", 00:12:37.846 "dma_device_type": 1 00:12:37.846 }, 00:12:37.846 { 00:12:37.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.846 "dma_device_type": 2 00:12:37.846 } 00:12:37.846 ], 00:12:37.846 "driver_specific": { 00:12:37.846 "raid": { 00:12:37.846 "uuid": "45a26a74-9d3a-4035-b77e-28fd9425c0aa", 00:12:37.846 "strip_size_kb": 0, 00:12:37.846 "state": "online", 00:12:37.846 "raid_level": "raid1", 00:12:37.846 "superblock": false, 00:12:37.846 "num_base_bdevs": 2, 00:12:37.846 "num_base_bdevs_discovered": 2, 00:12:37.846 "num_base_bdevs_operational": 2, 00:12:37.847 "base_bdevs_list": [ 00:12:37.847 { 00:12:37.847 "name": "BaseBdev1", 00:12:37.847 "uuid": "16e0a5d7-f9d2-45ec-ae95-48d7a223c626", 00:12:37.847 "is_configured": true, 00:12:37.847 "data_offset": 0, 00:12:37.847 "data_size": 65536 00:12:37.847 }, 00:12:37.847 { 00:12:37.847 "name": "BaseBdev2", 00:12:37.847 "uuid": 
"cb91ed4f-270b-4cca-ac45-00a69a77e114", 00:12:37.847 "is_configured": true, 00:12:37.847 "data_offset": 0, 00:12:37.847 "data_size": 65536 00:12:37.847 } 00:12:37.847 ] 00:12:37.847 } 00:12:37.847 } 00:12:37.847 }' 00:12:37.847 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:37.847 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:37.847 BaseBdev2' 00:12:37.847 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:37.847 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:37.847 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:37.847 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:37.847 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:37.847 12:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.847 12:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.847 12:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.847 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:37.847 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:37.847 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:37.847 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:37.847 12:11:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.847 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:37.847 12:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.847 12:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.847 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:37.847 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:37.847 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:37.847 12:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.847 12:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.847 [2024-11-25 12:11:33.860268] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:38.106 12:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.106 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:38.106 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:38.106 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:38.106 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:38.106 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:38.106 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:12:38.106 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:12:38.106 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.106 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.106 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.106 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:38.106 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.106 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.106 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.106 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.106 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.106 12:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.106 12:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.106 12:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.106 12:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.106 12:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.106 "name": "Existed_Raid", 00:12:38.106 "uuid": "45a26a74-9d3a-4035-b77e-28fd9425c0aa", 00:12:38.106 "strip_size_kb": 0, 00:12:38.106 "state": "online", 00:12:38.106 "raid_level": "raid1", 00:12:38.106 "superblock": false, 00:12:38.106 "num_base_bdevs": 2, 00:12:38.106 "num_base_bdevs_discovered": 1, 00:12:38.106 "num_base_bdevs_operational": 1, 00:12:38.106 "base_bdevs_list": [ 00:12:38.106 { 
00:12:38.106 "name": null, 00:12:38.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.106 "is_configured": false, 00:12:38.106 "data_offset": 0, 00:12:38.106 "data_size": 65536 00:12:38.106 }, 00:12:38.106 { 00:12:38.106 "name": "BaseBdev2", 00:12:38.106 "uuid": "cb91ed4f-270b-4cca-ac45-00a69a77e114", 00:12:38.106 "is_configured": true, 00:12:38.106 "data_offset": 0, 00:12:38.106 "data_size": 65536 00:12:38.106 } 00:12:38.106 ] 00:12:38.107 }' 00:12:38.107 12:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.107 12:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.366 12:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:38.366 12:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:38.366 12:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:38.366 12:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.366 12:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.366 12:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.366 12:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.629 12:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:38.629 12:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:38.629 12:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:38.629 12:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.629 12:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:38.629 [2024-11-25 12:11:34.480016] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:38.629 [2024-11-25 12:11:34.480135] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:38.629 [2024-11-25 12:11:34.565801] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:38.629 [2024-11-25 12:11:34.565869] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:38.629 [2024-11-25 12:11:34.565890] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:38.629 12:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.629 12:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:38.629 12:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:38.629 12:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.629 12:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:38.629 12:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.629 12:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.629 12:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.629 12:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:38.629 12:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:38.629 12:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:12:38.629 12:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62703 00:12:38.629 12:11:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62703 ']' 00:12:38.629 12:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62703 00:12:38.629 12:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:38.629 12:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:38.629 12:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62703 00:12:38.629 12:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:38.629 12:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:38.629 killing process with pid 62703 00:12:38.629 12:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62703' 00:12:38.629 12:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62703 00:12:38.629 [2024-11-25 12:11:34.645875] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:38.629 12:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62703 00:12:38.629 [2024-11-25 12:11:34.660544] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:40.024 12:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:40.024 00:12:40.024 real 0m5.342s 00:12:40.024 user 0m8.026s 00:12:40.024 sys 0m0.789s 00:12:40.024 12:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:40.024 12:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.024 ************************************ 00:12:40.024 END TEST raid_state_function_test 00:12:40.024 ************************************ 00:12:40.024 12:11:35 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:12:40.024 12:11:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:40.024 12:11:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:40.024 12:11:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:40.024 ************************************ 00:12:40.024 START TEST raid_state_function_test_sb 00:12:40.024 ************************************ 00:12:40.024 12:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:12:40.024 12:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:40.024 12:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:12:40.024 12:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:40.024 12:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:40.024 12:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:40.024 12:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:40.024 12:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:40.024 12:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:40.024 12:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:40.024 12:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:40.024 12:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:40.024 12:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:40.024 12:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:40.024 12:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:40.024 12:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:40.024 12:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:40.024 12:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:40.024 12:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:40.024 12:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:40.024 12:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:40.024 12:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:40.024 12:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:40.024 12:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62956 00:12:40.024 12:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62956' 00:12:40.024 12:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:40.024 Process raid pid: 62956 00:12:40.024 12:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62956 00:12:40.024 12:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62956 ']' 00:12:40.024 12:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.024 12:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:40.024 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.024 12:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.024 12:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:40.024 12:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.024 [2024-11-25 12:11:35.846598] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:12:40.024 [2024-11-25 12:11:35.846777] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:40.024 [2024-11-25 12:11:36.032409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.283 [2024-11-25 12:11:36.160536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.283 [2024-11-25 12:11:36.364048] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:40.283 [2024-11-25 12:11:36.364105] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:40.859 12:11:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:40.859 12:11:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:40.859 12:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:40.859 12:11:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.859 12:11:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.859 [2024-11-25 12:11:36.844415] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:40.859 [2024-11-25 12:11:36.844476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:40.859 [2024-11-25 12:11:36.844493] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:40.859 [2024-11-25 12:11:36.844509] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:40.859 12:11:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.859 12:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:40.859 12:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:40.859 12:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:40.859 12:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.859 12:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.859 12:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:40.859 12:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.859 12:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.859 12:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.859 12:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.859 12:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.859 12:11:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:40.859 12:11:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.859 12:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:40.859 12:11:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.859 12:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.859 "name": "Existed_Raid", 00:12:40.859 "uuid": "e1ade15a-9b15-40b4-9516-106999c223f7", 00:12:40.859 "strip_size_kb": 0, 00:12:40.859 "state": "configuring", 00:12:40.859 "raid_level": "raid1", 00:12:40.859 "superblock": true, 00:12:40.859 "num_base_bdevs": 2, 00:12:40.859 "num_base_bdevs_discovered": 0, 00:12:40.859 "num_base_bdevs_operational": 2, 00:12:40.859 "base_bdevs_list": [ 00:12:40.859 { 00:12:40.859 "name": "BaseBdev1", 00:12:40.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.859 "is_configured": false, 00:12:40.859 "data_offset": 0, 00:12:40.859 "data_size": 0 00:12:40.859 }, 00:12:40.859 { 00:12:40.859 "name": "BaseBdev2", 00:12:40.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.859 "is_configured": false, 00:12:40.859 "data_offset": 0, 00:12:40.859 "data_size": 0 00:12:40.859 } 00:12:40.859 ] 00:12:40.859 }' 00:12:40.859 12:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.859 12:11:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.449 12:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:41.449 12:11:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.449 12:11:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.449 [2024-11-25 12:11:37.360497] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:12:41.449 [2024-11-25 12:11:37.360541] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:41.449 12:11:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.449 12:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:41.449 12:11:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.449 12:11:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.449 [2024-11-25 12:11:37.368462] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:41.449 [2024-11-25 12:11:37.368513] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:41.449 [2024-11-25 12:11:37.368527] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:41.449 [2024-11-25 12:11:37.368546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:41.449 12:11:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.449 12:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:41.449 12:11:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.449 12:11:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.449 [2024-11-25 12:11:37.413167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:41.449 BaseBdev1 00:12:41.449 12:11:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.449 12:11:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:41.449 12:11:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:41.449 12:11:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:41.449 12:11:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:41.450 12:11:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:41.450 12:11:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:41.450 12:11:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:41.450 12:11:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.450 12:11:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.450 12:11:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.450 12:11:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:41.450 12:11:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.450 12:11:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.450 [ 00:12:41.450 { 00:12:41.450 "name": "BaseBdev1", 00:12:41.450 "aliases": [ 00:12:41.450 "8a34c890-64a7-43bc-8365-45729c1a907f" 00:12:41.450 ], 00:12:41.450 "product_name": "Malloc disk", 00:12:41.450 "block_size": 512, 00:12:41.450 "num_blocks": 65536, 00:12:41.450 "uuid": "8a34c890-64a7-43bc-8365-45729c1a907f", 00:12:41.450 "assigned_rate_limits": { 00:12:41.450 "rw_ios_per_sec": 0, 00:12:41.450 "rw_mbytes_per_sec": 0, 00:12:41.450 "r_mbytes_per_sec": 0, 00:12:41.450 "w_mbytes_per_sec": 0 00:12:41.450 }, 00:12:41.450 "claimed": true, 
00:12:41.450 "claim_type": "exclusive_write", 00:12:41.450 "zoned": false, 00:12:41.450 "supported_io_types": { 00:12:41.450 "read": true, 00:12:41.450 "write": true, 00:12:41.450 "unmap": true, 00:12:41.450 "flush": true, 00:12:41.450 "reset": true, 00:12:41.450 "nvme_admin": false, 00:12:41.450 "nvme_io": false, 00:12:41.450 "nvme_io_md": false, 00:12:41.450 "write_zeroes": true, 00:12:41.450 "zcopy": true, 00:12:41.450 "get_zone_info": false, 00:12:41.450 "zone_management": false, 00:12:41.450 "zone_append": false, 00:12:41.450 "compare": false, 00:12:41.450 "compare_and_write": false, 00:12:41.450 "abort": true, 00:12:41.450 "seek_hole": false, 00:12:41.450 "seek_data": false, 00:12:41.450 "copy": true, 00:12:41.450 "nvme_iov_md": false 00:12:41.450 }, 00:12:41.450 "memory_domains": [ 00:12:41.450 { 00:12:41.450 "dma_device_id": "system", 00:12:41.450 "dma_device_type": 1 00:12:41.450 }, 00:12:41.450 { 00:12:41.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:41.450 "dma_device_type": 2 00:12:41.450 } 00:12:41.450 ], 00:12:41.450 "driver_specific": {} 00:12:41.450 } 00:12:41.450 ] 00:12:41.450 12:11:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.450 12:11:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:41.450 12:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:41.450 12:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:41.450 12:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:41.450 12:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.450 12:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.450 12:11:37 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:41.450 12:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.450 12:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.450 12:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.450 12:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.450 12:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.450 12:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:41.450 12:11:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.450 12:11:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.450 12:11:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.450 12:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.450 "name": "Existed_Raid", 00:12:41.450 "uuid": "2e4e68df-81cc-442f-9655-a22d898aab8e", 00:12:41.450 "strip_size_kb": 0, 00:12:41.450 "state": "configuring", 00:12:41.450 "raid_level": "raid1", 00:12:41.450 "superblock": true, 00:12:41.450 "num_base_bdevs": 2, 00:12:41.450 "num_base_bdevs_discovered": 1, 00:12:41.450 "num_base_bdevs_operational": 2, 00:12:41.450 "base_bdevs_list": [ 00:12:41.450 { 00:12:41.450 "name": "BaseBdev1", 00:12:41.450 "uuid": "8a34c890-64a7-43bc-8365-45729c1a907f", 00:12:41.450 "is_configured": true, 00:12:41.450 "data_offset": 2048, 00:12:41.450 "data_size": 63488 00:12:41.450 }, 00:12:41.450 { 00:12:41.450 "name": "BaseBdev2", 00:12:41.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.450 "is_configured": false, 00:12:41.450 
"data_offset": 0, 00:12:41.450 "data_size": 0 00:12:41.450 } 00:12:41.450 ] 00:12:41.450 }' 00:12:41.450 12:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.450 12:11:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.020 12:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:42.020 12:11:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.020 12:11:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.020 [2024-11-25 12:11:37.945359] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:42.020 [2024-11-25 12:11:37.945434] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:42.020 12:11:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.020 12:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:42.020 12:11:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.020 12:11:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.020 [2024-11-25 12:11:37.953415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:42.020 [2024-11-25 12:11:37.955840] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:42.020 [2024-11-25 12:11:37.955898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:42.020 12:11:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.020 12:11:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:42.020 12:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:42.020 12:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:42.020 12:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:42.020 12:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:42.020 12:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:42.020 12:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:42.020 12:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:42.020 12:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.020 12:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.020 12:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.020 12:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.020 12:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.020 12:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:42.020 12:11:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.020 12:11:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.020 12:11:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.020 12:11:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.020 "name": "Existed_Raid", 00:12:42.020 "uuid": "1e2412bd-64b1-45cc-b489-25079db22ddb", 00:12:42.020 "strip_size_kb": 0, 00:12:42.020 "state": "configuring", 00:12:42.020 "raid_level": "raid1", 00:12:42.020 "superblock": true, 00:12:42.020 "num_base_bdevs": 2, 00:12:42.020 "num_base_bdevs_discovered": 1, 00:12:42.020 "num_base_bdevs_operational": 2, 00:12:42.020 "base_bdevs_list": [ 00:12:42.020 { 00:12:42.020 "name": "BaseBdev1", 00:12:42.020 "uuid": "8a34c890-64a7-43bc-8365-45729c1a907f", 00:12:42.020 "is_configured": true, 00:12:42.020 "data_offset": 2048, 00:12:42.020 "data_size": 63488 00:12:42.020 }, 00:12:42.020 { 00:12:42.020 "name": "BaseBdev2", 00:12:42.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.020 "is_configured": false, 00:12:42.020 "data_offset": 0, 00:12:42.020 "data_size": 0 00:12:42.020 } 00:12:42.020 ] 00:12:42.020 }' 00:12:42.020 12:11:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.020 12:11:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.589 12:11:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:42.589 12:11:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.589 12:11:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.589 [2024-11-25 12:11:38.499866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:42.589 [2024-11-25 12:11:38.500168] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:42.589 [2024-11-25 12:11:38.500196] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:42.589 [2024-11-25 12:11:38.500543] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:42.589 
BaseBdev2 00:12:42.589 [2024-11-25 12:11:38.500755] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:42.589 [2024-11-25 12:11:38.500785] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:42.589 [2024-11-25 12:11:38.500969] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:42.589 12:11:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.589 12:11:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:42.589 12:11:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:42.589 12:11:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:42.589 12:11:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:42.589 12:11:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:42.589 12:11:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:42.589 12:11:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:42.589 12:11:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.589 12:11:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.589 12:11:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.589 12:11:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:42.589 12:11:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.589 12:11:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:42.589 [ 00:12:42.589 { 00:12:42.589 "name": "BaseBdev2", 00:12:42.589 "aliases": [ 00:12:42.589 "4b4d4e1d-0671-4367-8d0f-8306ae5ad62b" 00:12:42.589 ], 00:12:42.589 "product_name": "Malloc disk", 00:12:42.589 "block_size": 512, 00:12:42.589 "num_blocks": 65536, 00:12:42.589 "uuid": "4b4d4e1d-0671-4367-8d0f-8306ae5ad62b", 00:12:42.589 "assigned_rate_limits": { 00:12:42.589 "rw_ios_per_sec": 0, 00:12:42.589 "rw_mbytes_per_sec": 0, 00:12:42.589 "r_mbytes_per_sec": 0, 00:12:42.589 "w_mbytes_per_sec": 0 00:12:42.589 }, 00:12:42.589 "claimed": true, 00:12:42.589 "claim_type": "exclusive_write", 00:12:42.589 "zoned": false, 00:12:42.589 "supported_io_types": { 00:12:42.589 "read": true, 00:12:42.589 "write": true, 00:12:42.589 "unmap": true, 00:12:42.589 "flush": true, 00:12:42.589 "reset": true, 00:12:42.589 "nvme_admin": false, 00:12:42.589 "nvme_io": false, 00:12:42.589 "nvme_io_md": false, 00:12:42.589 "write_zeroes": true, 00:12:42.589 "zcopy": true, 00:12:42.589 "get_zone_info": false, 00:12:42.589 "zone_management": false, 00:12:42.589 "zone_append": false, 00:12:42.589 "compare": false, 00:12:42.589 "compare_and_write": false, 00:12:42.589 "abort": true, 00:12:42.589 "seek_hole": false, 00:12:42.589 "seek_data": false, 00:12:42.589 "copy": true, 00:12:42.589 "nvme_iov_md": false 00:12:42.589 }, 00:12:42.589 "memory_domains": [ 00:12:42.589 { 00:12:42.589 "dma_device_id": "system", 00:12:42.589 "dma_device_type": 1 00:12:42.589 }, 00:12:42.589 { 00:12:42.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.589 "dma_device_type": 2 00:12:42.589 } 00:12:42.589 ], 00:12:42.589 "driver_specific": {} 00:12:42.589 } 00:12:42.589 ] 00:12:42.589 12:11:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.589 12:11:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:42.589 12:11:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:42.589 12:11:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:42.589 12:11:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:12:42.589 12:11:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:42.589 12:11:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:42.589 12:11:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:42.589 12:11:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:42.589 12:11:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:42.589 12:11:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.589 12:11:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.589 12:11:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.589 12:11:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.589 12:11:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.589 12:11:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.589 12:11:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.589 12:11:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:42.589 12:11:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.589 12:11:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:12:42.589 "name": "Existed_Raid", 00:12:42.589 "uuid": "1e2412bd-64b1-45cc-b489-25079db22ddb", 00:12:42.589 "strip_size_kb": 0, 00:12:42.589 "state": "online", 00:12:42.589 "raid_level": "raid1", 00:12:42.589 "superblock": true, 00:12:42.589 "num_base_bdevs": 2, 00:12:42.589 "num_base_bdevs_discovered": 2, 00:12:42.589 "num_base_bdevs_operational": 2, 00:12:42.589 "base_bdevs_list": [ 00:12:42.589 { 00:12:42.589 "name": "BaseBdev1", 00:12:42.589 "uuid": "8a34c890-64a7-43bc-8365-45729c1a907f", 00:12:42.589 "is_configured": true, 00:12:42.589 "data_offset": 2048, 00:12:42.589 "data_size": 63488 00:12:42.589 }, 00:12:42.589 { 00:12:42.589 "name": "BaseBdev2", 00:12:42.589 "uuid": "4b4d4e1d-0671-4367-8d0f-8306ae5ad62b", 00:12:42.589 "is_configured": true, 00:12:42.589 "data_offset": 2048, 00:12:42.589 "data_size": 63488 00:12:42.589 } 00:12:42.589 ] 00:12:42.589 }' 00:12:42.589 12:11:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.589 12:11:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.158 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:43.158 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:43.158 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:43.158 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:43.158 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:43.158 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:43.158 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:43.158 12:11:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:43.158 12:11:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.158 12:11:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.158 [2024-11-25 12:11:39.052422] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:43.158 12:11:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.158 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:43.158 "name": "Existed_Raid", 00:12:43.158 "aliases": [ 00:12:43.158 "1e2412bd-64b1-45cc-b489-25079db22ddb" 00:12:43.158 ], 00:12:43.158 "product_name": "Raid Volume", 00:12:43.158 "block_size": 512, 00:12:43.158 "num_blocks": 63488, 00:12:43.158 "uuid": "1e2412bd-64b1-45cc-b489-25079db22ddb", 00:12:43.158 "assigned_rate_limits": { 00:12:43.158 "rw_ios_per_sec": 0, 00:12:43.158 "rw_mbytes_per_sec": 0, 00:12:43.158 "r_mbytes_per_sec": 0, 00:12:43.158 "w_mbytes_per_sec": 0 00:12:43.158 }, 00:12:43.158 "claimed": false, 00:12:43.158 "zoned": false, 00:12:43.158 "supported_io_types": { 00:12:43.158 "read": true, 00:12:43.158 "write": true, 00:12:43.158 "unmap": false, 00:12:43.159 "flush": false, 00:12:43.159 "reset": true, 00:12:43.159 "nvme_admin": false, 00:12:43.159 "nvme_io": false, 00:12:43.159 "nvme_io_md": false, 00:12:43.159 "write_zeroes": true, 00:12:43.159 "zcopy": false, 00:12:43.159 "get_zone_info": false, 00:12:43.159 "zone_management": false, 00:12:43.159 "zone_append": false, 00:12:43.159 "compare": false, 00:12:43.159 "compare_and_write": false, 00:12:43.159 "abort": false, 00:12:43.159 "seek_hole": false, 00:12:43.159 "seek_data": false, 00:12:43.159 "copy": false, 00:12:43.159 "nvme_iov_md": false 00:12:43.159 }, 00:12:43.159 "memory_domains": [ 00:12:43.159 { 00:12:43.159 "dma_device_id": "system", 00:12:43.159 
"dma_device_type": 1 00:12:43.159 }, 00:12:43.159 { 00:12:43.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.159 "dma_device_type": 2 00:12:43.159 }, 00:12:43.159 { 00:12:43.159 "dma_device_id": "system", 00:12:43.159 "dma_device_type": 1 00:12:43.159 }, 00:12:43.159 { 00:12:43.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.159 "dma_device_type": 2 00:12:43.159 } 00:12:43.159 ], 00:12:43.159 "driver_specific": { 00:12:43.159 "raid": { 00:12:43.159 "uuid": "1e2412bd-64b1-45cc-b489-25079db22ddb", 00:12:43.159 "strip_size_kb": 0, 00:12:43.159 "state": "online", 00:12:43.159 "raid_level": "raid1", 00:12:43.159 "superblock": true, 00:12:43.159 "num_base_bdevs": 2, 00:12:43.159 "num_base_bdevs_discovered": 2, 00:12:43.159 "num_base_bdevs_operational": 2, 00:12:43.159 "base_bdevs_list": [ 00:12:43.159 { 00:12:43.159 "name": "BaseBdev1", 00:12:43.159 "uuid": "8a34c890-64a7-43bc-8365-45729c1a907f", 00:12:43.159 "is_configured": true, 00:12:43.159 "data_offset": 2048, 00:12:43.159 "data_size": 63488 00:12:43.159 }, 00:12:43.159 { 00:12:43.159 "name": "BaseBdev2", 00:12:43.159 "uuid": "4b4d4e1d-0671-4367-8d0f-8306ae5ad62b", 00:12:43.159 "is_configured": true, 00:12:43.159 "data_offset": 2048, 00:12:43.159 "data_size": 63488 00:12:43.159 } 00:12:43.159 ] 00:12:43.159 } 00:12:43.159 } 00:12:43.159 }' 00:12:43.159 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:43.159 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:43.159 BaseBdev2' 00:12:43.159 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:43.159 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:43.159 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:12:43.159 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:43.159 12:11:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.159 12:11:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.159 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:43.159 12:11:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.159 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:43.159 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:43.159 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:43.417 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:43.417 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:43.417 12:11:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.417 12:11:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.417 12:11:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.417 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:43.417 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:43.417 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:43.417 12:11:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.417 12:11:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.417 [2024-11-25 12:11:39.300149] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:43.417 12:11:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.417 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:43.417 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:43.417 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:43.417 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:43.417 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:43.417 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:12:43.417 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:43.417 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.417 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.417 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.417 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:43.417 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.417 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.417 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:43.417 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.417 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.417 12:11:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.417 12:11:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.417 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.417 12:11:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.417 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.417 "name": "Existed_Raid", 00:12:43.417 "uuid": "1e2412bd-64b1-45cc-b489-25079db22ddb", 00:12:43.417 "strip_size_kb": 0, 00:12:43.417 "state": "online", 00:12:43.417 "raid_level": "raid1", 00:12:43.417 "superblock": true, 00:12:43.417 "num_base_bdevs": 2, 00:12:43.417 "num_base_bdevs_discovered": 1, 00:12:43.417 "num_base_bdevs_operational": 1, 00:12:43.417 "base_bdevs_list": [ 00:12:43.417 { 00:12:43.417 "name": null, 00:12:43.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.417 "is_configured": false, 00:12:43.417 "data_offset": 0, 00:12:43.417 "data_size": 63488 00:12:43.417 }, 00:12:43.417 { 00:12:43.417 "name": "BaseBdev2", 00:12:43.417 "uuid": "4b4d4e1d-0671-4367-8d0f-8306ae5ad62b", 00:12:43.417 "is_configured": true, 00:12:43.417 "data_offset": 2048, 00:12:43.417 "data_size": 63488 00:12:43.417 } 00:12:43.417 ] 00:12:43.417 }' 00:12:43.417 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.417 12:11:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.985 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:12:43.985 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:43.985 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:43.985 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.985 12:11:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.985 12:11:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.985 12:11:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.985 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:43.985 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:43.985 12:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:43.985 12:11:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.985 12:11:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.985 [2024-11-25 12:11:39.971922] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:43.985 [2024-11-25 12:11:39.972051] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:43.985 [2024-11-25 12:11:40.056602] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:43.985 [2024-11-25 12:11:40.056673] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:43.985 [2024-11-25 12:11:40.056694] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:43.985 12:11:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.985 12:11:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:43.985 12:11:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:43.985 12:11:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.985 12:11:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:43.985 12:11:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.985 12:11:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.985 12:11:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.244 12:11:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:44.244 12:11:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:44.244 12:11:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:12:44.244 12:11:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62956 00:12:44.244 12:11:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62956 ']' 00:12:44.244 12:11:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62956 00:12:44.244 12:11:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:44.244 12:11:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:44.244 12:11:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62956 00:12:44.244 12:11:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:44.244 12:11:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:44.244 killing process with pid 62956 00:12:44.244 12:11:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62956' 00:12:44.244 12:11:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62956 00:12:44.244 [2024-11-25 12:11:40.145215] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:44.244 12:11:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62956 00:12:44.244 [2024-11-25 12:11:40.159948] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:45.231 12:11:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:45.231 00:12:45.231 real 0m5.448s 00:12:45.231 user 0m8.292s 00:12:45.231 sys 0m0.735s 00:12:45.231 12:11:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:45.231 ************************************ 00:12:45.231 END TEST raid_state_function_test_sb 00:12:45.231 ************************************ 00:12:45.231 12:11:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.231 12:11:41 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:12:45.231 12:11:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:45.231 12:11:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:45.231 12:11:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:45.231 ************************************ 00:12:45.231 START TEST raid_superblock_test 00:12:45.231 ************************************ 00:12:45.231 12:11:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:12:45.231 12:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local 
raid_level=raid1 00:12:45.231 12:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:12:45.231 12:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:45.231 12:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:45.231 12:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:45.231 12:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:45.231 12:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:45.231 12:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:45.231 12:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:45.231 12:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:45.231 12:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:45.231 12:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:45.231 12:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:45.231 12:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:12:45.231 12:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:45.231 12:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63214 00:12:45.231 12:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63214 00:12:45.231 12:11:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63214 ']' 00:12:45.231 12:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:45.231 12:11:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.231 12:11:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:45.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.231 12:11:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.231 12:11:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:45.231 12:11:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.489 [2024-11-25 12:11:41.328830] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:12:45.489 [2024-11-25 12:11:41.329006] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63214 ] 00:12:45.489 [2024-11-25 12:11:41.505137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.747 [2024-11-25 12:11:41.631552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.747 [2024-11-25 12:11:41.832022] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:45.747 [2024-11-25 12:11:41.832080] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:46.313 12:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:46.313 12:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:46.313 12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:46.313 12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:46.313 12:11:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:46.313 12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:46.313 12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:46.314 12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:46.314 12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:46.314 12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:46.314 12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:46.314 12:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.314 12:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.314 malloc1 00:12:46.314 12:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.314 12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:46.314 12:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.314 12:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.572 [2024-11-25 12:11:42.403971] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:46.572 [2024-11-25 12:11:42.404055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:46.572 [2024-11-25 12:11:42.404089] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:46.572 [2024-11-25 12:11:42.404106] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:46.572 
[2024-11-25 12:11:42.406905] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:46.572 [2024-11-25 12:11:42.406950] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:46.572 pt1 00:12:46.572 12:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.572 12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:46.572 12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:46.572 12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:46.572 12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:46.572 12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:46.572 12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:46.572 12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:46.572 12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:46.572 12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:46.572 12:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.572 12:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.572 malloc2 00:12:46.572 12:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.572 12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:46.572 12:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.572 12:11:42 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.572 [2024-11-25 12:11:42.451659] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:46.572 [2024-11-25 12:11:42.451726] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:46.572 [2024-11-25 12:11:42.451759] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:46.572 [2024-11-25 12:11:42.451773] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:46.572 [2024-11-25 12:11:42.454471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:46.572 [2024-11-25 12:11:42.454515] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:46.572 pt2 00:12:46.572 12:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.572 12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:46.572 12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:46.573 12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:12:46.573 12:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.573 12:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.573 [2024-11-25 12:11:42.459732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:46.573 [2024-11-25 12:11:42.462124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:46.573 [2024-11-25 12:11:42.462357] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:46.573 [2024-11-25 12:11:42.462382] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:46.573 [2024-11-25 
12:11:42.462681] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:46.573 [2024-11-25 12:11:42.462894] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:46.573 [2024-11-25 12:11:42.462928] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:46.573 [2024-11-25 12:11:42.463114] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:46.573 12:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.573 12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:46.573 12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:46.573 12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:46.573 12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:46.573 12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:46.573 12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:46.573 12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.573 12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.573 12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.573 12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.573 12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.573 12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.573 12:11:42 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.573 12:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.573 12:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.573 12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.573 "name": "raid_bdev1", 00:12:46.573 "uuid": "49167b7a-bd91-4a87-8568-6a9d5b85bd76", 00:12:46.573 "strip_size_kb": 0, 00:12:46.573 "state": "online", 00:12:46.573 "raid_level": "raid1", 00:12:46.573 "superblock": true, 00:12:46.573 "num_base_bdevs": 2, 00:12:46.573 "num_base_bdevs_discovered": 2, 00:12:46.573 "num_base_bdevs_operational": 2, 00:12:46.573 "base_bdevs_list": [ 00:12:46.573 { 00:12:46.573 "name": "pt1", 00:12:46.573 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:46.573 "is_configured": true, 00:12:46.573 "data_offset": 2048, 00:12:46.573 "data_size": 63488 00:12:46.573 }, 00:12:46.573 { 00:12:46.573 "name": "pt2", 00:12:46.573 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:46.573 "is_configured": true, 00:12:46.573 "data_offset": 2048, 00:12:46.573 "data_size": 63488 00:12:46.573 } 00:12:46.573 ] 00:12:46.573 }' 00:12:46.573 12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.573 12:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.141 12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:47.141 12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:47.141 12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:47.141 12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:47.141 12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:47.141 
12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:47.141 12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:47.141 12:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.141 12:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.141 12:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:47.141 [2024-11-25 12:11:42.960183] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:47.141 12:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.141 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:47.141 "name": "raid_bdev1", 00:12:47.141 "aliases": [ 00:12:47.141 "49167b7a-bd91-4a87-8568-6a9d5b85bd76" 00:12:47.141 ], 00:12:47.141 "product_name": "Raid Volume", 00:12:47.141 "block_size": 512, 00:12:47.141 "num_blocks": 63488, 00:12:47.141 "uuid": "49167b7a-bd91-4a87-8568-6a9d5b85bd76", 00:12:47.141 "assigned_rate_limits": { 00:12:47.141 "rw_ios_per_sec": 0, 00:12:47.141 "rw_mbytes_per_sec": 0, 00:12:47.141 "r_mbytes_per_sec": 0, 00:12:47.141 "w_mbytes_per_sec": 0 00:12:47.141 }, 00:12:47.141 "claimed": false, 00:12:47.141 "zoned": false, 00:12:47.141 "supported_io_types": { 00:12:47.141 "read": true, 00:12:47.141 "write": true, 00:12:47.141 "unmap": false, 00:12:47.141 "flush": false, 00:12:47.141 "reset": true, 00:12:47.141 "nvme_admin": false, 00:12:47.141 "nvme_io": false, 00:12:47.141 "nvme_io_md": false, 00:12:47.141 "write_zeroes": true, 00:12:47.141 "zcopy": false, 00:12:47.141 "get_zone_info": false, 00:12:47.141 "zone_management": false, 00:12:47.141 "zone_append": false, 00:12:47.141 "compare": false, 00:12:47.141 "compare_and_write": false, 00:12:47.141 "abort": false, 00:12:47.141 "seek_hole": false, 
00:12:47.141 "seek_data": false, 00:12:47.141 "copy": false, 00:12:47.141 "nvme_iov_md": false 00:12:47.141 }, 00:12:47.141 "memory_domains": [ 00:12:47.141 { 00:12:47.141 "dma_device_id": "system", 00:12:47.141 "dma_device_type": 1 00:12:47.141 }, 00:12:47.141 { 00:12:47.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.141 "dma_device_type": 2 00:12:47.141 }, 00:12:47.141 { 00:12:47.141 "dma_device_id": "system", 00:12:47.141 "dma_device_type": 1 00:12:47.141 }, 00:12:47.141 { 00:12:47.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.141 "dma_device_type": 2 00:12:47.141 } 00:12:47.141 ], 00:12:47.141 "driver_specific": { 00:12:47.141 "raid": { 00:12:47.141 "uuid": "49167b7a-bd91-4a87-8568-6a9d5b85bd76", 00:12:47.141 "strip_size_kb": 0, 00:12:47.141 "state": "online", 00:12:47.141 "raid_level": "raid1", 00:12:47.141 "superblock": true, 00:12:47.141 "num_base_bdevs": 2, 00:12:47.141 "num_base_bdevs_discovered": 2, 00:12:47.141 "num_base_bdevs_operational": 2, 00:12:47.141 "base_bdevs_list": [ 00:12:47.141 { 00:12:47.141 "name": "pt1", 00:12:47.141 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:47.141 "is_configured": true, 00:12:47.141 "data_offset": 2048, 00:12:47.141 "data_size": 63488 00:12:47.141 }, 00:12:47.141 { 00:12:47.141 "name": "pt2", 00:12:47.141 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:47.141 "is_configured": true, 00:12:47.141 "data_offset": 2048, 00:12:47.141 "data_size": 63488 00:12:47.141 } 00:12:47.141 ] 00:12:47.141 } 00:12:47.141 } 00:12:47.141 }' 00:12:47.141 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:47.141 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:47.141 pt2' 00:12:47.141 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:47.141 12:11:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:47.141 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:47.141 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:47.141 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.141 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:47.141 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.141 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.141 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:47.141 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:47.141 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:47.141 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:47.141 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:47.141 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.141 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.141 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.141 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:47.141 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:47.141 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:12:47.141 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:47.141 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.141 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.141 [2024-11-25 12:11:43.212259] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=49167b7a-bd91-4a87-8568-6a9d5b85bd76 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 49167b7a-bd91-4a87-8568-6a9d5b85bd76 ']' 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.399 [2024-11-25 12:11:43.259895] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:47.399 [2024-11-25 12:11:43.260056] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:47.399 [2024-11-25 12:11:43.260280] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:47.399 [2024-11-25 12:11:43.260481] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:47.399 [2024-11-25 12:11:43.260640] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.399 [2024-11-25 12:11:43.399962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:47.399 [2024-11-25 12:11:43.402448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:47.399 [2024-11-25 12:11:43.402536] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:12:47.399 [2024-11-25 12:11:43.402626] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:47.399 [2024-11-25 12:11:43.402653] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:47.399 [2024-11-25 12:11:43.402668] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:12:47.399 request: 00:12:47.399 { 00:12:47.399 "name": "raid_bdev1", 00:12:47.399 "raid_level": "raid1", 00:12:47.399 "base_bdevs": [ 00:12:47.399 "malloc1", 00:12:47.399 "malloc2" 00:12:47.399 ], 00:12:47.399 "superblock": false, 00:12:47.399 "method": "bdev_raid_create", 00:12:47.399 "req_id": 1 00:12:47.399 } 00:12:47.399 Got JSON-RPC error response 00:12:47.399 response: 00:12:47.399 { 00:12:47.399 "code": -17, 00:12:47.399 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:47.399 } 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:47.399 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:47.400 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:47.400 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.400 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.400 [2024-11-25 12:11:43.471948] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:47.400 [2024-11-25 12:11:43.472025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.400 [2024-11-25 12:11:43.472053] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:47.400 [2024-11-25 12:11:43.472071] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.400 [2024-11-25 12:11:43.474917] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.400 [2024-11-25 12:11:43.474966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:47.400 [2024-11-25 12:11:43.475069] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:47.400 [2024-11-25 12:11:43.475148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:47.400 pt1 00:12:47.400 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.400 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:12:47.400 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:47.400 12:11:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:47.400 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:47.400 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:47.400 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:47.400 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.400 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.400 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.400 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.400 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.400 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.400 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.400 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.658 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.658 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.658 "name": "raid_bdev1", 00:12:47.658 "uuid": "49167b7a-bd91-4a87-8568-6a9d5b85bd76", 00:12:47.658 "strip_size_kb": 0, 00:12:47.658 "state": "configuring", 00:12:47.658 "raid_level": "raid1", 00:12:47.658 "superblock": true, 00:12:47.658 "num_base_bdevs": 2, 00:12:47.658 "num_base_bdevs_discovered": 1, 00:12:47.658 "num_base_bdevs_operational": 2, 00:12:47.658 "base_bdevs_list": [ 00:12:47.658 { 00:12:47.658 "name": "pt1", 00:12:47.658 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:47.658 
"is_configured": true, 00:12:47.658 "data_offset": 2048, 00:12:47.658 "data_size": 63488 00:12:47.658 }, 00:12:47.658 { 00:12:47.658 "name": null, 00:12:47.658 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:47.658 "is_configured": false, 00:12:47.658 "data_offset": 2048, 00:12:47.658 "data_size": 63488 00:12:47.658 } 00:12:47.658 ] 00:12:47.658 }' 00:12:47.658 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.658 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.917 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:12:47.917 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:47.917 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:47.917 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:47.917 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.917 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.917 [2024-11-25 12:11:43.980127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:47.917 [2024-11-25 12:11:43.980364] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.917 [2024-11-25 12:11:43.980406] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:47.917 [2024-11-25 12:11:43.980426] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.917 [2024-11-25 12:11:43.981117] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.917 [2024-11-25 12:11:43.981161] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:47.917 [2024-11-25 12:11:43.981269] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:47.917 [2024-11-25 12:11:43.981308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:47.917 [2024-11-25 12:11:43.981477] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:47.917 [2024-11-25 12:11:43.981498] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:47.917 [2024-11-25 12:11:43.981796] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:47.917 [2024-11-25 12:11:43.982010] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:47.917 [2024-11-25 12:11:43.982042] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:47.917 [2024-11-25 12:11:43.982215] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:47.917 pt2 00:12:47.917 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.918 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:47.918 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:47.918 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:47.918 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:47.918 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:47.918 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:47.918 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:47.918 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:47.918 
12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.918 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.918 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.918 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.918 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.918 12:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.918 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.918 12:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.918 12:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.177 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.177 "name": "raid_bdev1", 00:12:48.177 "uuid": "49167b7a-bd91-4a87-8568-6a9d5b85bd76", 00:12:48.177 "strip_size_kb": 0, 00:12:48.177 "state": "online", 00:12:48.177 "raid_level": "raid1", 00:12:48.177 "superblock": true, 00:12:48.177 "num_base_bdevs": 2, 00:12:48.177 "num_base_bdevs_discovered": 2, 00:12:48.177 "num_base_bdevs_operational": 2, 00:12:48.177 "base_bdevs_list": [ 00:12:48.177 { 00:12:48.177 "name": "pt1", 00:12:48.177 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:48.177 "is_configured": true, 00:12:48.177 "data_offset": 2048, 00:12:48.177 "data_size": 63488 00:12:48.177 }, 00:12:48.177 { 00:12:48.177 "name": "pt2", 00:12:48.177 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:48.177 "is_configured": true, 00:12:48.177 "data_offset": 2048, 00:12:48.177 "data_size": 63488 00:12:48.177 } 00:12:48.177 ] 00:12:48.177 }' 00:12:48.177 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:12:48.177 12:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.744 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:48.744 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:48.744 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:48.744 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:48.744 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:48.744 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:48.745 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:48.745 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:48.745 12:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.745 12:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.745 [2024-11-25 12:11:44.648732] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:48.745 12:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.745 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:48.745 "name": "raid_bdev1", 00:12:48.745 "aliases": [ 00:12:48.745 "49167b7a-bd91-4a87-8568-6a9d5b85bd76" 00:12:48.745 ], 00:12:48.745 "product_name": "Raid Volume", 00:12:48.745 "block_size": 512, 00:12:48.745 "num_blocks": 63488, 00:12:48.745 "uuid": "49167b7a-bd91-4a87-8568-6a9d5b85bd76", 00:12:48.745 "assigned_rate_limits": { 00:12:48.745 "rw_ios_per_sec": 0, 00:12:48.745 "rw_mbytes_per_sec": 0, 00:12:48.745 "r_mbytes_per_sec": 0, 00:12:48.745 "w_mbytes_per_sec": 0 
00:12:48.745 }, 00:12:48.745 "claimed": false, 00:12:48.745 "zoned": false, 00:12:48.745 "supported_io_types": { 00:12:48.745 "read": true, 00:12:48.745 "write": true, 00:12:48.745 "unmap": false, 00:12:48.745 "flush": false, 00:12:48.745 "reset": true, 00:12:48.745 "nvme_admin": false, 00:12:48.745 "nvme_io": false, 00:12:48.745 "nvme_io_md": false, 00:12:48.745 "write_zeroes": true, 00:12:48.745 "zcopy": false, 00:12:48.745 "get_zone_info": false, 00:12:48.745 "zone_management": false, 00:12:48.745 "zone_append": false, 00:12:48.745 "compare": false, 00:12:48.745 "compare_and_write": false, 00:12:48.745 "abort": false, 00:12:48.745 "seek_hole": false, 00:12:48.745 "seek_data": false, 00:12:48.745 "copy": false, 00:12:48.745 "nvme_iov_md": false 00:12:48.745 }, 00:12:48.745 "memory_domains": [ 00:12:48.745 { 00:12:48.745 "dma_device_id": "system", 00:12:48.745 "dma_device_type": 1 00:12:48.745 }, 00:12:48.745 { 00:12:48.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.745 "dma_device_type": 2 00:12:48.745 }, 00:12:48.745 { 00:12:48.745 "dma_device_id": "system", 00:12:48.745 "dma_device_type": 1 00:12:48.745 }, 00:12:48.745 { 00:12:48.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.745 "dma_device_type": 2 00:12:48.745 } 00:12:48.745 ], 00:12:48.745 "driver_specific": { 00:12:48.745 "raid": { 00:12:48.745 "uuid": "49167b7a-bd91-4a87-8568-6a9d5b85bd76", 00:12:48.745 "strip_size_kb": 0, 00:12:48.745 "state": "online", 00:12:48.745 "raid_level": "raid1", 00:12:48.745 "superblock": true, 00:12:48.745 "num_base_bdevs": 2, 00:12:48.745 "num_base_bdevs_discovered": 2, 00:12:48.745 "num_base_bdevs_operational": 2, 00:12:48.745 "base_bdevs_list": [ 00:12:48.745 { 00:12:48.745 "name": "pt1", 00:12:48.745 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:48.745 "is_configured": true, 00:12:48.745 "data_offset": 2048, 00:12:48.745 "data_size": 63488 00:12:48.745 }, 00:12:48.745 { 00:12:48.745 "name": "pt2", 00:12:48.745 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:12:48.745 "is_configured": true, 00:12:48.745 "data_offset": 2048, 00:12:48.745 "data_size": 63488 00:12:48.745 } 00:12:48.745 ] 00:12:48.745 } 00:12:48.745 } 00:12:48.745 }' 00:12:48.745 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:48.745 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:48.745 pt2' 00:12:48.745 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:48.745 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:48.745 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:48.745 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:48.745 12:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.745 12:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.745 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:48.745 12:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.003 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:49.003 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:49.003 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:49.003 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:49.003 12:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:49.003 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:49.003 12:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.003 12:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.003 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:49.003 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:49.003 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:49.003 12:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.003 12:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.003 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:49.003 [2024-11-25 12:11:44.924720] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:49.003 12:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.004 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 49167b7a-bd91-4a87-8568-6a9d5b85bd76 '!=' 49167b7a-bd91-4a87-8568-6a9d5b85bd76 ']' 00:12:49.004 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:12:49.004 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:49.004 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:49.004 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:49.004 12:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.004 12:11:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:49.004 [2024-11-25 12:11:44.980505] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:49.004 12:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.004 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:49.004 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:49.004 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:49.004 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:49.004 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:49.004 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:49.004 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.004 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.004 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.004 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.004 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.004 12:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.004 12:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.004 12:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.004 12:11:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.004 12:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:12:49.004 "name": "raid_bdev1", 00:12:49.004 "uuid": "49167b7a-bd91-4a87-8568-6a9d5b85bd76", 00:12:49.004 "strip_size_kb": 0, 00:12:49.004 "state": "online", 00:12:49.004 "raid_level": "raid1", 00:12:49.004 "superblock": true, 00:12:49.004 "num_base_bdevs": 2, 00:12:49.004 "num_base_bdevs_discovered": 1, 00:12:49.004 "num_base_bdevs_operational": 1, 00:12:49.004 "base_bdevs_list": [ 00:12:49.004 { 00:12:49.004 "name": null, 00:12:49.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.004 "is_configured": false, 00:12:49.004 "data_offset": 0, 00:12:49.004 "data_size": 63488 00:12:49.004 }, 00:12:49.004 { 00:12:49.004 "name": "pt2", 00:12:49.004 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:49.004 "is_configured": true, 00:12:49.004 "data_offset": 2048, 00:12:49.004 "data_size": 63488 00:12:49.004 } 00:12:49.004 ] 00:12:49.004 }' 00:12:49.004 12:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.004 12:11:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.569 12:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:49.569 12:11:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.569 12:11:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.569 [2024-11-25 12:11:45.516576] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:49.569 [2024-11-25 12:11:45.516611] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:49.569 [2024-11-25 12:11:45.516711] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:49.569 [2024-11-25 12:11:45.516777] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:49.570 [2024-11-25 12:11:45.516796] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:49.570 12:11:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.570 12:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.570 12:11:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.570 12:11:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.570 12:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:49.570 12:11:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.570 12:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:49.570 12:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:49.570 12:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:49.570 12:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:49.570 12:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:49.570 12:11:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.570 12:11:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.570 12:11:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.570 12:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:49.570 12:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:49.570 12:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:49.570 12:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:49.570 12:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:12:49.570 12:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:49.570 12:11:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.570 12:11:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.570 [2024-11-25 12:11:45.580550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:49.570 [2024-11-25 12:11:45.580628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:49.570 [2024-11-25 12:11:45.580656] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:49.570 [2024-11-25 12:11:45.580673] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:49.570 [2024-11-25 12:11:45.583612] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:49.570 [2024-11-25 12:11:45.583662] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:49.570 [2024-11-25 12:11:45.583759] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:49.570 [2024-11-25 12:11:45.583821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:49.570 [2024-11-25 12:11:45.583948] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:49.570 [2024-11-25 12:11:45.583970] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:49.570 [2024-11-25 12:11:45.584254] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:49.570 pt2 00:12:49.570 [2024-11-25 12:11:45.584527] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:49.570 [2024-11-25 12:11:45.584550] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:12:49.570 [2024-11-25 12:11:45.584776] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:49.570 12:11:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.570 12:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:49.570 12:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:49.570 12:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:49.570 12:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:49.570 12:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:49.570 12:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:49.570 12:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.570 12:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.570 12:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.570 12:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.570 12:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.570 12:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.570 12:11:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.570 12:11:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.570 12:11:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.570 12:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:12:49.570 "name": "raid_bdev1", 00:12:49.570 "uuid": "49167b7a-bd91-4a87-8568-6a9d5b85bd76", 00:12:49.570 "strip_size_kb": 0, 00:12:49.570 "state": "online", 00:12:49.570 "raid_level": "raid1", 00:12:49.570 "superblock": true, 00:12:49.570 "num_base_bdevs": 2, 00:12:49.570 "num_base_bdevs_discovered": 1, 00:12:49.570 "num_base_bdevs_operational": 1, 00:12:49.570 "base_bdevs_list": [ 00:12:49.570 { 00:12:49.570 "name": null, 00:12:49.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.570 "is_configured": false, 00:12:49.570 "data_offset": 2048, 00:12:49.570 "data_size": 63488 00:12:49.570 }, 00:12:49.570 { 00:12:49.570 "name": "pt2", 00:12:49.570 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:49.570 "is_configured": true, 00:12:49.570 "data_offset": 2048, 00:12:49.570 "data_size": 63488 00:12:49.570 } 00:12:49.570 ] 00:12:49.570 }' 00:12:49.570 12:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.570 12:11:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.160 12:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:50.160 12:11:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.160 12:11:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.160 [2024-11-25 12:11:46.052808] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:50.160 [2024-11-25 12:11:46.052844] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:50.160 [2024-11-25 12:11:46.052937] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:50.160 [2024-11-25 12:11:46.053008] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:50.160 [2024-11-25 12:11:46.053023] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:50.160 12:11:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.160 12:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.160 12:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:50.160 12:11:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.160 12:11:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.160 12:11:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.160 12:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:50.160 12:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:50.160 12:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:12:50.160 12:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:50.160 12:11:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.160 12:11:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.160 [2024-11-25 12:11:46.116826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:50.160 [2024-11-25 12:11:46.117008] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:50.160 [2024-11-25 12:11:46.117083] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:12:50.160 [2024-11-25 12:11:46.117294] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:50.160 [2024-11-25 12:11:46.120157] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:50.160 [2024-11-25 12:11:46.120203] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:50.160 [2024-11-25 12:11:46.120302] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:50.160 [2024-11-25 12:11:46.120379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:50.160 [2024-11-25 12:11:46.120555] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:50.160 [2024-11-25 12:11:46.120572] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:50.160 [2024-11-25 12:11:46.120593] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:12:50.160 [2024-11-25 12:11:46.120667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:50.160 [2024-11-25 12:11:46.120768] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:50.160 [2024-11-25 12:11:46.120783] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:50.160 [2024-11-25 12:11:46.121097] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:50.160 [2024-11-25 12:11:46.121285] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:50.160 [2024-11-25 12:11:46.121305] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:12:50.160 pt1 00:12:50.160 [2024-11-25 12:11:46.121538] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.160 12:11:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.160 12:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:12:50.160 12:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:12:50.160 12:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:50.160 12:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.160 12:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.160 12:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.160 12:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:50.160 12:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.160 12:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.160 12:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.160 12:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.160 12:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.160 12:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.160 12:11:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.160 12:11:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.160 12:11:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.160 12:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.160 "name": "raid_bdev1", 00:12:50.160 "uuid": "49167b7a-bd91-4a87-8568-6a9d5b85bd76", 00:12:50.160 "strip_size_kb": 0, 00:12:50.160 "state": "online", 00:12:50.160 "raid_level": "raid1", 00:12:50.160 "superblock": true, 00:12:50.160 "num_base_bdevs": 2, 00:12:50.160 "num_base_bdevs_discovered": 1, 00:12:50.160 "num_base_bdevs_operational": 
1, 00:12:50.160 "base_bdevs_list": [ 00:12:50.160 { 00:12:50.160 "name": null, 00:12:50.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.160 "is_configured": false, 00:12:50.160 "data_offset": 2048, 00:12:50.160 "data_size": 63488 00:12:50.160 }, 00:12:50.160 { 00:12:50.160 "name": "pt2", 00:12:50.160 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:50.160 "is_configured": true, 00:12:50.160 "data_offset": 2048, 00:12:50.160 "data_size": 63488 00:12:50.160 } 00:12:50.160 ] 00:12:50.160 }' 00:12:50.160 12:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.160 12:11:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.725 12:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:50.725 12:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:50.725 12:11:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.725 12:11:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.725 12:11:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.725 12:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:50.725 12:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:50.725 12:11:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.725 12:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:50.725 12:11:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.725 [2024-11-25 12:11:46.637288] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:50.725 12:11:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.725 12:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 49167b7a-bd91-4a87-8568-6a9d5b85bd76 '!=' 49167b7a-bd91-4a87-8568-6a9d5b85bd76 ']' 00:12:50.725 12:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63214 00:12:50.725 12:11:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63214 ']' 00:12:50.725 12:11:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63214 00:12:50.725 12:11:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:50.725 12:11:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:50.725 12:11:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63214 00:12:50.725 killing process with pid 63214 00:12:50.725 12:11:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:50.725 12:11:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:50.725 12:11:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63214' 00:12:50.725 12:11:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63214 00:12:50.725 [2024-11-25 12:11:46.716307] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:50.725 [2024-11-25 12:11:46.716428] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:50.725 12:11:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63214 00:12:50.725 [2024-11-25 12:11:46.716493] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:50.725 [2024-11-25 12:11:46.716516] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state 
offline 00:12:50.982 [2024-11-25 12:11:46.899077] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:51.914 12:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:51.915 00:12:51.915 real 0m6.676s 00:12:51.915 user 0m10.695s 00:12:51.915 sys 0m0.887s 00:12:51.915 12:11:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:51.915 ************************************ 00:12:51.915 END TEST raid_superblock_test 00:12:51.915 ************************************ 00:12:51.915 12:11:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.915 12:11:47 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:12:51.915 12:11:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:51.915 12:11:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:51.915 12:11:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:51.915 ************************************ 00:12:51.915 START TEST raid_read_error_test 00:12:51.915 ************************************ 00:12:51.915 12:11:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:12:51.915 12:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:51.915 12:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:12:51.915 12:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:51.915 12:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:51.915 12:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:51.915 12:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:51.915 12:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:12:51.915 12:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:51.915 12:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:51.915 12:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:51.915 12:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:51.915 12:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:51.915 12:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:51.915 12:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:51.915 12:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:51.915 12:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:51.915 12:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:51.915 12:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:51.915 12:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:51.915 12:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:51.915 12:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:51.915 12:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.aU5bGLYFM7 00:12:51.915 12:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63546 00:12:51.915 12:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63546 00:12:51.915 12:11:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63546 ']' 00:12:51.915 12:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:51.915 12:11:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.915 12:11:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:51.915 12:11:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.915 12:11:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:51.915 12:11:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.173 [2024-11-25 12:11:48.066681] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:12:52.173 [2024-11-25 12:11:48.066847] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63546 ] 00:12:52.173 [2024-11-25 12:11:48.249131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.432 [2024-11-25 12:11:48.401983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.752 [2024-11-25 12:11:48.621790] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:52.752 [2024-11-25 12:11:48.621849] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:53.026 12:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:53.026 12:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:53.026 12:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 
-- # for bdev in "${base_bdevs[@]}" 00:12:53.026 12:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:53.026 12:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.026 12:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.283 BaseBdev1_malloc 00:12:53.283 12:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.283 12:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:53.283 12:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.283 12:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.283 true 00:12:53.283 12:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.283 12:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:53.283 12:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.283 12:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.283 [2024-11-25 12:11:49.131986] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:53.283 [2024-11-25 12:11:49.132060] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.283 [2024-11-25 12:11:49.132094] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:53.283 [2024-11-25 12:11:49.132113] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.283 [2024-11-25 12:11:49.134995] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.283 [2024-11-25 12:11:49.135047] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:53.283 BaseBdev1 00:12:53.283 12:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.283 12:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:53.283 12:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:53.283 12:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.283 12:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.283 BaseBdev2_malloc 00:12:53.283 12:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.283 12:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:53.283 12:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.283 12:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.283 true 00:12:53.283 12:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.283 12:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:53.283 12:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.283 12:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.283 [2024-11-25 12:11:49.188795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:53.283 [2024-11-25 12:11:49.188864] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.283 [2024-11-25 12:11:49.188894] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:53.283 [2024-11-25 
12:11:49.188912] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.283 [2024-11-25 12:11:49.191684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.283 [2024-11-25 12:11:49.191888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:53.283 BaseBdev2 00:12:53.283 12:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.283 12:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:12:53.283 12:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.283 12:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.283 [2024-11-25 12:11:49.196876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:53.283 [2024-11-25 12:11:49.199294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:53.283 [2024-11-25 12:11:49.199708] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:53.283 [2024-11-25 12:11:49.199738] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:53.283 [2024-11-25 12:11:49.200025] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:53.283 [2024-11-25 12:11:49.200257] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:53.283 [2024-11-25 12:11:49.200274] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:53.283 [2024-11-25 12:11:49.200479] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:53.283 12:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.283 12:11:49 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:53.283 12:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:53.283 12:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:53.283 12:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:53.283 12:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:53.283 12:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:53.283 12:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.283 12:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.283 12:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.283 12:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.283 12:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.283 12:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.283 12:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.283 12:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.283 12:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.283 12:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.283 "name": "raid_bdev1", 00:12:53.283 "uuid": "49b611c5-a2d4-413f-b636-98fe9ed9bf2e", 00:12:53.283 "strip_size_kb": 0, 00:12:53.283 "state": "online", 00:12:53.283 "raid_level": "raid1", 00:12:53.283 "superblock": true, 00:12:53.283 "num_base_bdevs": 2, 
00:12:53.283 "num_base_bdevs_discovered": 2, 00:12:53.283 "num_base_bdevs_operational": 2, 00:12:53.283 "base_bdevs_list": [ 00:12:53.283 { 00:12:53.283 "name": "BaseBdev1", 00:12:53.283 "uuid": "5ac6b979-5785-52a6-818e-10e363b09db2", 00:12:53.283 "is_configured": true, 00:12:53.283 "data_offset": 2048, 00:12:53.283 "data_size": 63488 00:12:53.283 }, 00:12:53.283 { 00:12:53.283 "name": "BaseBdev2", 00:12:53.283 "uuid": "89447a02-4037-5ace-b730-90e9d9f28f1b", 00:12:53.283 "is_configured": true, 00:12:53.283 "data_offset": 2048, 00:12:53.283 "data_size": 63488 00:12:53.283 } 00:12:53.283 ] 00:12:53.283 }' 00:12:53.283 12:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.283 12:11:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.852 12:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:53.852 12:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:53.852 [2024-11-25 12:11:49.842464] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:54.787 12:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:54.787 12:11:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.787 12:11:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.787 12:11:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.787 12:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:54.787 12:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:54.787 12:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:54.787 12:11:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:12:54.788 12:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:54.788 12:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:54.788 12:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:54.788 12:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:54.788 12:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:54.788 12:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:54.788 12:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.788 12:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.788 12:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.788 12:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.788 12:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.788 12:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.788 12:11:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.788 12:11:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.788 12:11:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.788 12:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.788 "name": "raid_bdev1", 00:12:54.788 "uuid": "49b611c5-a2d4-413f-b636-98fe9ed9bf2e", 00:12:54.788 "strip_size_kb": 0, 00:12:54.788 "state": "online", 
00:12:54.788 "raid_level": "raid1", 00:12:54.788 "superblock": true, 00:12:54.788 "num_base_bdevs": 2, 00:12:54.788 "num_base_bdevs_discovered": 2, 00:12:54.788 "num_base_bdevs_operational": 2, 00:12:54.788 "base_bdevs_list": [ 00:12:54.788 { 00:12:54.788 "name": "BaseBdev1", 00:12:54.788 "uuid": "5ac6b979-5785-52a6-818e-10e363b09db2", 00:12:54.788 "is_configured": true, 00:12:54.788 "data_offset": 2048, 00:12:54.788 "data_size": 63488 00:12:54.788 }, 00:12:54.788 { 00:12:54.788 "name": "BaseBdev2", 00:12:54.788 "uuid": "89447a02-4037-5ace-b730-90e9d9f28f1b", 00:12:54.788 "is_configured": true, 00:12:54.788 "data_offset": 2048, 00:12:54.788 "data_size": 63488 00:12:54.788 } 00:12:54.788 ] 00:12:54.788 }' 00:12:54.788 12:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.788 12:11:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.354 12:11:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:55.354 12:11:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.354 12:11:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.354 [2024-11-25 12:11:51.273224] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:55.354 [2024-11-25 12:11:51.273426] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:55.354 [2024-11-25 12:11:51.276841] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:55.354 [2024-11-25 12:11:51.277021] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:55.354 [2024-11-25 12:11:51.277232] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:55.354 [2024-11-25 12:11:51.277408] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:12:55.354 { 00:12:55.354 "results": [ 00:12:55.354 { 00:12:55.354 "job": "raid_bdev1", 00:12:55.354 "core_mask": "0x1", 00:12:55.354 "workload": "randrw", 00:12:55.354 "percentage": 50, 00:12:55.354 "status": "finished", 00:12:55.354 "queue_depth": 1, 00:12:55.354 "io_size": 131072, 00:12:55.354 "runtime": 1.428319, 00:12:55.354 "iops": 12266.867555497056, 00:12:55.354 "mibps": 1533.358444437132, 00:12:55.354 "io_failed": 0, 00:12:55.354 "io_timeout": 0, 00:12:55.354 "avg_latency_us": 77.4773677301524, 00:12:55.354 "min_latency_us": 41.89090909090909, 00:12:55.354 "max_latency_us": 1891.6072727272726 00:12:55.354 } 00:12:55.354 ], 00:12:55.354 "core_count": 1 00:12:55.354 } 00:12:55.354 12:11:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.354 12:11:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63546 00:12:55.354 12:11:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63546 ']' 00:12:55.354 12:11:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63546 00:12:55.354 12:11:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:55.354 12:11:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:55.354 12:11:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63546 00:12:55.354 killing process with pid 63546 00:12:55.354 12:11:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:55.354 12:11:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:55.354 12:11:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63546' 00:12:55.354 12:11:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63546 00:12:55.354 [2024-11-25 
12:11:51.314796] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:55.354 12:11:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63546 00:12:55.354 [2024-11-25 12:11:51.434121] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:56.729 12:11:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.aU5bGLYFM7 00:12:56.730 12:11:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:56.730 12:11:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:56.730 12:11:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:56.730 12:11:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:56.730 12:11:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:56.730 12:11:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:56.730 12:11:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:56.730 00:12:56.730 real 0m4.564s 00:12:56.730 user 0m5.791s 00:12:56.730 sys 0m0.533s 00:12:56.730 12:11:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:56.730 12:11:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.730 ************************************ 00:12:56.730 END TEST raid_read_error_test 00:12:56.730 ************************************ 00:12:56.730 12:11:52 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:12:56.730 12:11:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:56.730 12:11:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:56.730 12:11:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:56.730 ************************************ 00:12:56.730 START TEST 
raid_write_error_test 00:12:56.730 ************************************ 00:12:56.730 12:11:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:12:56.730 12:11:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:56.730 12:11:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:12:56.730 12:11:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:56.730 12:11:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:56.730 12:11:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:56.730 12:11:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:56.730 12:11:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:56.730 12:11:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:56.730 12:11:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:56.730 12:11:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:56.730 12:11:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:56.730 12:11:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:56.730 12:11:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:56.730 12:11:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:56.730 12:11:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:56.730 12:11:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:56.730 12:11:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:56.730 12:11:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:56.730 12:11:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:56.730 12:11:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:56.730 12:11:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:56.730 12:11:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.xwL9fMbcgP 00:12:56.730 12:11:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63692 00:12:56.730 12:11:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63692 00:12:56.730 12:11:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:56.730 12:11:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63692 ']' 00:12:56.730 12:11:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.730 12:11:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:56.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.730 12:11:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.730 12:11:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:56.730 12:11:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.730 [2024-11-25 12:11:52.690411] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:12:56.730 [2024-11-25 12:11:52.690580] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63692 ] 00:12:56.994 [2024-11-25 12:11:52.882585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.994 [2024-11-25 12:11:53.038955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.269 [2024-11-25 12:11:53.274604] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:57.269 [2024-11-25 12:11:53.274661] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:57.837 12:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:57.837 12:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:57.837 12:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:57.837 12:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:57.837 12:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.837 12:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.837 BaseBdev1_malloc 00:12:57.837 12:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.837 12:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:57.837 12:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.837 12:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.837 true 00:12:57.837 12:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:57.837 12:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:57.837 12:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.837 12:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.837 [2024-11-25 12:11:53.699736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:57.837 [2024-11-25 12:11:53.699943] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:57.837 [2024-11-25 12:11:53.699986] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:57.838 [2024-11-25 12:11:53.700006] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.838 [2024-11-25 12:11:53.702812] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.838 [2024-11-25 12:11:53.702860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:57.838 BaseBdev1 00:12:57.838 12:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.838 12:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:57.838 12:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:57.838 12:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.838 12:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.838 BaseBdev2_malloc 00:12:57.838 12:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.838 12:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:57.838 12:11:53 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.838 12:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.838 true 00:12:57.838 12:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.838 12:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:57.838 12:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.838 12:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.838 [2024-11-25 12:11:53.755763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:57.838 [2024-11-25 12:11:53.755832] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:57.838 [2024-11-25 12:11:53.755861] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:57.838 [2024-11-25 12:11:53.755878] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.838 [2024-11-25 12:11:53.758817] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.838 [2024-11-25 12:11:53.758986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:57.838 BaseBdev2 00:12:57.838 12:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.838 12:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:12:57.838 12:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.838 12:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.838 [2024-11-25 12:11:53.763928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:12:57.838 [2024-11-25 12:11:53.766432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:57.838 [2024-11-25 12:11:53.766685] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:57.838 [2024-11-25 12:11:53.766708] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:57.838 [2024-11-25 12:11:53.767004] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:57.838 [2024-11-25 12:11:53.767235] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:57.838 [2024-11-25 12:11:53.767252] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:57.838 [2024-11-25 12:11:53.767511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:57.838 12:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.838 12:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:57.838 12:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:57.838 12:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:57.838 12:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:57.838 12:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:57.838 12:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:57.838 12:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.838 12:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.838 12:11:53 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.838 12:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.838 12:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.838 12:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.838 12:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.838 12:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.838 12:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.838 12:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.838 "name": "raid_bdev1", 00:12:57.838 "uuid": "057436c2-de9b-4bde-8ed3-25ff8ca5811b", 00:12:57.838 "strip_size_kb": 0, 00:12:57.838 "state": "online", 00:12:57.838 "raid_level": "raid1", 00:12:57.838 "superblock": true, 00:12:57.838 "num_base_bdevs": 2, 00:12:57.838 "num_base_bdevs_discovered": 2, 00:12:57.838 "num_base_bdevs_operational": 2, 00:12:57.838 "base_bdevs_list": [ 00:12:57.838 { 00:12:57.838 "name": "BaseBdev1", 00:12:57.838 "uuid": "25e6bd52-f5f9-5f6e-9472-d9e0e22f4f39", 00:12:57.838 "is_configured": true, 00:12:57.838 "data_offset": 2048, 00:12:57.838 "data_size": 63488 00:12:57.838 }, 00:12:57.838 { 00:12:57.838 "name": "BaseBdev2", 00:12:57.838 "uuid": "221ec4f0-a099-594b-8212-c970fb75bde8", 00:12:57.838 "is_configured": true, 00:12:57.838 "data_offset": 2048, 00:12:57.838 "data_size": 63488 00:12:57.838 } 00:12:57.838 ] 00:12:57.838 }' 00:12:57.838 12:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.838 12:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.406 12:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:58.406 12:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:58.406 [2024-11-25 12:11:54.337456] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:59.339 12:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:59.339 12:11:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.339 12:11:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.339 [2024-11-25 12:11:55.237873] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:59.340 [2024-11-25 12:11:55.237941] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:59.340 [2024-11-25 12:11:55.238169] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:12:59.340 12:11:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.340 12:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:59.340 12:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:59.340 12:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:59.340 12:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:12:59.340 12:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:59.340 12:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.340 12:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.340 12:11:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.340 12:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.340 12:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:59.340 12:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.340 12:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.340 12:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.340 12:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.340 12:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.340 12:11:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.340 12:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.340 12:11:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.340 12:11:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.340 12:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.340 "name": "raid_bdev1", 00:12:59.340 "uuid": "057436c2-de9b-4bde-8ed3-25ff8ca5811b", 00:12:59.340 "strip_size_kb": 0, 00:12:59.340 "state": "online", 00:12:59.340 "raid_level": "raid1", 00:12:59.340 "superblock": true, 00:12:59.340 "num_base_bdevs": 2, 00:12:59.340 "num_base_bdevs_discovered": 1, 00:12:59.340 "num_base_bdevs_operational": 1, 00:12:59.340 "base_bdevs_list": [ 00:12:59.340 { 00:12:59.340 "name": null, 00:12:59.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.340 "is_configured": false, 00:12:59.340 "data_offset": 0, 00:12:59.340 "data_size": 63488 00:12:59.340 }, 
00:12:59.340 { 00:12:59.340 "name": "BaseBdev2", 00:12:59.340 "uuid": "221ec4f0-a099-594b-8212-c970fb75bde8", 00:12:59.340 "is_configured": true, 00:12:59.340 "data_offset": 2048, 00:12:59.340 "data_size": 63488 00:12:59.340 } 00:12:59.340 ] 00:12:59.340 }' 00:12:59.340 12:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.340 12:11:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.905 12:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:59.905 12:11:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.905 12:11:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.905 [2024-11-25 12:11:55.769064] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:59.905 [2024-11-25 12:11:55.769224] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:59.905 [2024-11-25 12:11:55.772616] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:59.905 [2024-11-25 12:11:55.772781] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.905 [2024-11-25 12:11:55.772909] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:59.905 [2024-11-25 12:11:55.773083] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:59.905 { 00:12:59.905 "results": [ 00:12:59.905 { 00:12:59.905 "job": "raid_bdev1", 00:12:59.905 "core_mask": "0x1", 00:12:59.905 "workload": "randrw", 00:12:59.905 "percentage": 50, 00:12:59.905 "status": "finished", 00:12:59.905 "queue_depth": 1, 00:12:59.905 "io_size": 131072, 00:12:59.905 "runtime": 1.429124, 00:12:59.905 "iops": 14951.816637324682, 00:12:59.905 "mibps": 1868.9770796655853, 00:12:59.905 "io_failed": 0, 
00:12:59.905 "io_timeout": 0, 00:12:59.905 "avg_latency_us": 62.85880535039651, 00:12:59.905 "min_latency_us": 40.72727272727273, 00:12:59.905 "max_latency_us": 1794.7927272727272 00:12:59.905 } 00:12:59.905 ], 00:12:59.905 "core_count": 1 00:12:59.905 } 00:12:59.905 12:11:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.905 12:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63692 00:12:59.905 12:11:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63692 ']' 00:12:59.905 12:11:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63692 00:12:59.905 12:11:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:59.905 12:11:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:59.905 12:11:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63692 00:12:59.905 killing process with pid 63692 00:12:59.905 12:11:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:59.905 12:11:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:59.905 12:11:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63692' 00:12:59.905 12:11:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63692 00:12:59.905 [2024-11-25 12:11:55.810892] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:59.905 12:11:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63692 00:12:59.905 [2024-11-25 12:11:55.930909] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:01.279 12:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.xwL9fMbcgP 00:13:01.279 12:11:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:01.279 12:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:01.279 12:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:13:01.279 12:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:01.279 ************************************ 00:13:01.279 END TEST raid_write_error_test 00:13:01.279 ************************************ 00:13:01.279 12:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:01.279 12:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:01.279 12:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:01.279 00:13:01.279 real 0m4.436s 00:13:01.279 user 0m5.517s 00:13:01.279 sys 0m0.535s 00:13:01.279 12:11:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:01.279 12:11:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.279 12:11:57 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:13:01.279 12:11:57 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:01.279 12:11:57 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:13:01.280 12:11:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:01.280 12:11:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:01.280 12:11:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:01.280 ************************************ 00:13:01.280 START TEST raid_state_function_test 00:13:01.280 ************************************ 00:13:01.280 12:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:13:01.280 12:11:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:13:01.280 12:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:01.280 12:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:01.280 12:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:01.280 12:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:01.280 12:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:01.280 12:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:01.280 12:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:01.280 12:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:01.280 12:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:01.280 12:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:01.280 12:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:01.280 12:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:01.280 12:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:01.280 12:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:01.280 12:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:01.280 12:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:01.280 12:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:01.280 12:11:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@211 -- # local strip_size 00:13:01.280 12:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:01.280 12:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:01.280 12:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:13:01.280 12:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:01.280 12:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:01.280 12:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:01.280 12:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:01.280 12:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63835 00:13:01.280 Process raid pid: 63835 00:13:01.280 12:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63835' 00:13:01.280 12:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63835 00:13:01.280 12:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63835 ']' 00:13:01.280 12:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:01.280 12:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.280 12:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:01.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.280 12:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:01.280 12:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:01.280 12:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.280 [2024-11-25 12:11:57.156260] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:13:01.280 [2024-11-25 12:11:57.156636] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:01.280 [2024-11-25 12:11:57.327769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.538 [2024-11-25 12:11:57.456031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.800 [2024-11-25 12:11:57.661213] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:01.800 [2024-11-25 12:11:57.661270] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:02.058 12:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:02.058 12:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:02.058 12:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:02.058 12:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.058 12:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.059 [2024-11-25 12:11:58.102726] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:02.059 [2024-11-25 12:11:58.102789] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:02.059 [2024-11-25 12:11:58.102806] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:02.059 [2024-11-25 12:11:58.102822] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:02.059 [2024-11-25 12:11:58.102832] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:02.059 [2024-11-25 12:11:58.102846] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:02.059 12:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.059 12:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:02.059 12:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:02.059 12:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:02.059 12:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:02.059 12:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:02.059 12:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:02.059 12:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.059 12:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.059 12:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.059 12:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.059 12:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.059 12:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:13:02.059 12:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.059 12:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.059 12:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.317 12:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.317 "name": "Existed_Raid", 00:13:02.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.317 "strip_size_kb": 64, 00:13:02.317 "state": "configuring", 00:13:02.317 "raid_level": "raid0", 00:13:02.317 "superblock": false, 00:13:02.317 "num_base_bdevs": 3, 00:13:02.317 "num_base_bdevs_discovered": 0, 00:13:02.317 "num_base_bdevs_operational": 3, 00:13:02.317 "base_bdevs_list": [ 00:13:02.317 { 00:13:02.317 "name": "BaseBdev1", 00:13:02.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.317 "is_configured": false, 00:13:02.317 "data_offset": 0, 00:13:02.317 "data_size": 0 00:13:02.317 }, 00:13:02.317 { 00:13:02.317 "name": "BaseBdev2", 00:13:02.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.317 "is_configured": false, 00:13:02.317 "data_offset": 0, 00:13:02.317 "data_size": 0 00:13:02.317 }, 00:13:02.317 { 00:13:02.317 "name": "BaseBdev3", 00:13:02.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.317 "is_configured": false, 00:13:02.317 "data_offset": 0, 00:13:02.317 "data_size": 0 00:13:02.317 } 00:13:02.317 ] 00:13:02.317 }' 00:13:02.318 12:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.318 12:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.576 12:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:02.576 12:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.576 12:11:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.576 [2024-11-25 12:11:58.554792] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:02.577 [2024-11-25 12:11:58.554837] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:02.577 12:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.577 12:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:02.577 12:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.577 12:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.577 [2024-11-25 12:11:58.562773] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:02.577 [2024-11-25 12:11:58.562826] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:02.577 [2024-11-25 12:11:58.562840] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:02.577 [2024-11-25 12:11:58.562855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:02.577 [2024-11-25 12:11:58.562865] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:02.577 [2024-11-25 12:11:58.562879] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:02.577 12:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.577 12:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:02.577 12:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:02.577 12:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.577 [2024-11-25 12:11:58.607042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:02.577 BaseBdev1 00:13:02.577 12:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.577 12:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:02.577 12:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:02.577 12:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:02.577 12:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:02.577 12:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:02.577 12:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:02.577 12:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:02.577 12:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.577 12:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.577 12:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.577 12:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:02.577 12:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.577 12:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.577 [ 00:13:02.577 { 00:13:02.577 "name": "BaseBdev1", 00:13:02.577 "aliases": [ 00:13:02.577 "669c50f4-6ba0-4dcd-91b1-f9cfd265fb74" 00:13:02.577 ], 00:13:02.577 
"product_name": "Malloc disk", 00:13:02.577 "block_size": 512, 00:13:02.577 "num_blocks": 65536, 00:13:02.577 "uuid": "669c50f4-6ba0-4dcd-91b1-f9cfd265fb74", 00:13:02.577 "assigned_rate_limits": { 00:13:02.577 "rw_ios_per_sec": 0, 00:13:02.577 "rw_mbytes_per_sec": 0, 00:13:02.577 "r_mbytes_per_sec": 0, 00:13:02.577 "w_mbytes_per_sec": 0 00:13:02.577 }, 00:13:02.577 "claimed": true, 00:13:02.577 "claim_type": "exclusive_write", 00:13:02.577 "zoned": false, 00:13:02.577 "supported_io_types": { 00:13:02.577 "read": true, 00:13:02.577 "write": true, 00:13:02.577 "unmap": true, 00:13:02.577 "flush": true, 00:13:02.577 "reset": true, 00:13:02.577 "nvme_admin": false, 00:13:02.577 "nvme_io": false, 00:13:02.577 "nvme_io_md": false, 00:13:02.577 "write_zeroes": true, 00:13:02.577 "zcopy": true, 00:13:02.577 "get_zone_info": false, 00:13:02.577 "zone_management": false, 00:13:02.577 "zone_append": false, 00:13:02.577 "compare": false, 00:13:02.577 "compare_and_write": false, 00:13:02.577 "abort": true, 00:13:02.577 "seek_hole": false, 00:13:02.577 "seek_data": false, 00:13:02.577 "copy": true, 00:13:02.577 "nvme_iov_md": false 00:13:02.577 }, 00:13:02.577 "memory_domains": [ 00:13:02.577 { 00:13:02.577 "dma_device_id": "system", 00:13:02.577 "dma_device_type": 1 00:13:02.577 }, 00:13:02.577 { 00:13:02.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.577 "dma_device_type": 2 00:13:02.577 } 00:13:02.577 ], 00:13:02.577 "driver_specific": {} 00:13:02.577 } 00:13:02.577 ] 00:13:02.577 12:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.577 12:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:02.577 12:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:02.577 12:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:02.577 12:11:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:02.577 12:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:02.577 12:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:02.577 12:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:02.577 12:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.577 12:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.577 12:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.577 12:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.577 12:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.577 12:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.577 12:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.577 12:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:02.577 12:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.836 12:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.836 "name": "Existed_Raid", 00:13:02.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.836 "strip_size_kb": 64, 00:13:02.836 "state": "configuring", 00:13:02.836 "raid_level": "raid0", 00:13:02.836 "superblock": false, 00:13:02.836 "num_base_bdevs": 3, 00:13:02.836 "num_base_bdevs_discovered": 1, 00:13:02.836 "num_base_bdevs_operational": 3, 00:13:02.836 "base_bdevs_list": [ 00:13:02.836 { 00:13:02.836 "name": "BaseBdev1", 
00:13:02.836 "uuid": "669c50f4-6ba0-4dcd-91b1-f9cfd265fb74", 00:13:02.836 "is_configured": true, 00:13:02.836 "data_offset": 0, 00:13:02.836 "data_size": 65536 00:13:02.836 }, 00:13:02.836 { 00:13:02.836 "name": "BaseBdev2", 00:13:02.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.836 "is_configured": false, 00:13:02.836 "data_offset": 0, 00:13:02.836 "data_size": 0 00:13:02.836 }, 00:13:02.836 { 00:13:02.836 "name": "BaseBdev3", 00:13:02.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.836 "is_configured": false, 00:13:02.836 "data_offset": 0, 00:13:02.836 "data_size": 0 00:13:02.836 } 00:13:02.836 ] 00:13:02.836 }' 00:13:02.836 12:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.836 12:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.096 12:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:03.096 12:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.096 12:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.096 [2024-11-25 12:11:59.111223] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:03.096 [2024-11-25 12:11:59.111286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:03.096 12:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.096 12:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:03.096 12:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.096 12:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.096 [2024-11-25 
12:11:59.119262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:03.096 [2024-11-25 12:11:59.121703] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:03.096 [2024-11-25 12:11:59.121909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:03.096 [2024-11-25 12:11:59.121937] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:03.096 [2024-11-25 12:11:59.121954] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:03.096 12:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.096 12:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:03.096 12:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:03.096 12:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:03.096 12:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:03.096 12:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:03.096 12:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:03.096 12:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:03.096 12:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:03.096 12:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.096 12:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.096 12:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:03.096 12:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.096 12:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.096 12:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.096 12:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.096 12:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:03.096 12:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.096 12:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.096 "name": "Existed_Raid", 00:13:03.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.096 "strip_size_kb": 64, 00:13:03.096 "state": "configuring", 00:13:03.096 "raid_level": "raid0", 00:13:03.096 "superblock": false, 00:13:03.096 "num_base_bdevs": 3, 00:13:03.096 "num_base_bdevs_discovered": 1, 00:13:03.096 "num_base_bdevs_operational": 3, 00:13:03.096 "base_bdevs_list": [ 00:13:03.096 { 00:13:03.096 "name": "BaseBdev1", 00:13:03.096 "uuid": "669c50f4-6ba0-4dcd-91b1-f9cfd265fb74", 00:13:03.096 "is_configured": true, 00:13:03.096 "data_offset": 0, 00:13:03.096 "data_size": 65536 00:13:03.096 }, 00:13:03.096 { 00:13:03.096 "name": "BaseBdev2", 00:13:03.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.096 "is_configured": false, 00:13:03.096 "data_offset": 0, 00:13:03.096 "data_size": 0 00:13:03.096 }, 00:13:03.096 { 00:13:03.096 "name": "BaseBdev3", 00:13:03.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.096 "is_configured": false, 00:13:03.096 "data_offset": 0, 00:13:03.096 "data_size": 0 00:13:03.096 } 00:13:03.096 ] 00:13:03.096 }' 00:13:03.096 12:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:13:03.096 12:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.661 12:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:03.661 12:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.661 12:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.661 [2024-11-25 12:11:59.633170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:03.661 BaseBdev2 00:13:03.661 12:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.661 12:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:03.661 12:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:03.661 12:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:03.661 12:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:03.661 12:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:03.661 12:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:03.661 12:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:03.661 12:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.661 12:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.661 12:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.661 12:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:03.661 12:11:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.661 12:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.661 [ 00:13:03.661 { 00:13:03.661 "name": "BaseBdev2", 00:13:03.661 "aliases": [ 00:13:03.661 "30616518-98ba-4801-b8aa-46f68d965644" 00:13:03.661 ], 00:13:03.661 "product_name": "Malloc disk", 00:13:03.661 "block_size": 512, 00:13:03.661 "num_blocks": 65536, 00:13:03.661 "uuid": "30616518-98ba-4801-b8aa-46f68d965644", 00:13:03.661 "assigned_rate_limits": { 00:13:03.661 "rw_ios_per_sec": 0, 00:13:03.661 "rw_mbytes_per_sec": 0, 00:13:03.661 "r_mbytes_per_sec": 0, 00:13:03.661 "w_mbytes_per_sec": 0 00:13:03.661 }, 00:13:03.661 "claimed": true, 00:13:03.661 "claim_type": "exclusive_write", 00:13:03.661 "zoned": false, 00:13:03.661 "supported_io_types": { 00:13:03.661 "read": true, 00:13:03.661 "write": true, 00:13:03.661 "unmap": true, 00:13:03.661 "flush": true, 00:13:03.661 "reset": true, 00:13:03.661 "nvme_admin": false, 00:13:03.661 "nvme_io": false, 00:13:03.661 "nvme_io_md": false, 00:13:03.661 "write_zeroes": true, 00:13:03.661 "zcopy": true, 00:13:03.661 "get_zone_info": false, 00:13:03.661 "zone_management": false, 00:13:03.661 "zone_append": false, 00:13:03.661 "compare": false, 00:13:03.661 "compare_and_write": false, 00:13:03.661 "abort": true, 00:13:03.661 "seek_hole": false, 00:13:03.661 "seek_data": false, 00:13:03.661 "copy": true, 00:13:03.661 "nvme_iov_md": false 00:13:03.661 }, 00:13:03.661 "memory_domains": [ 00:13:03.661 { 00:13:03.661 "dma_device_id": "system", 00:13:03.661 "dma_device_type": 1 00:13:03.661 }, 00:13:03.661 { 00:13:03.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.661 "dma_device_type": 2 00:13:03.661 } 00:13:03.662 ], 00:13:03.662 "driver_specific": {} 00:13:03.662 } 00:13:03.662 ] 00:13:03.662 12:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.662 12:11:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:03.662 12:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:03.662 12:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:03.662 12:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:03.662 12:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:03.662 12:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:03.662 12:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:03.662 12:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:03.662 12:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:03.662 12:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.662 12:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.662 12:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.662 12:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.662 12:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.662 12:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.662 12:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:03.662 12:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.662 12:11:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.662 12:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.662 "name": "Existed_Raid", 00:13:03.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.662 "strip_size_kb": 64, 00:13:03.662 "state": "configuring", 00:13:03.662 "raid_level": "raid0", 00:13:03.662 "superblock": false, 00:13:03.662 "num_base_bdevs": 3, 00:13:03.662 "num_base_bdevs_discovered": 2, 00:13:03.662 "num_base_bdevs_operational": 3, 00:13:03.662 "base_bdevs_list": [ 00:13:03.662 { 00:13:03.662 "name": "BaseBdev1", 00:13:03.662 "uuid": "669c50f4-6ba0-4dcd-91b1-f9cfd265fb74", 00:13:03.662 "is_configured": true, 00:13:03.662 "data_offset": 0, 00:13:03.662 "data_size": 65536 00:13:03.662 }, 00:13:03.662 { 00:13:03.662 "name": "BaseBdev2", 00:13:03.662 "uuid": "30616518-98ba-4801-b8aa-46f68d965644", 00:13:03.662 "is_configured": true, 00:13:03.662 "data_offset": 0, 00:13:03.662 "data_size": 65536 00:13:03.662 }, 00:13:03.662 { 00:13:03.662 "name": "BaseBdev3", 00:13:03.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.662 "is_configured": false, 00:13:03.662 "data_offset": 0, 00:13:03.662 "data_size": 0 00:13:03.662 } 00:13:03.662 ] 00:13:03.662 }' 00:13:03.662 12:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.662 12:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.229 12:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:04.229 12:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.229 12:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.229 [2024-11-25 12:12:00.250241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:04.229 [2024-11-25 12:12:00.250295] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:04.229 [2024-11-25 12:12:00.250316] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:04.229 [2024-11-25 12:12:00.250708] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:04.229 [2024-11-25 12:12:00.251067] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:04.229 [2024-11-25 12:12:00.251091] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:04.229 [2024-11-25 12:12:00.251430] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.229 BaseBdev3 00:13:04.229 12:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.229 12:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:04.229 12:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:04.229 12:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:04.229 12:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:04.229 12:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:04.229 12:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:04.229 12:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:04.229 12:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.229 12:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.229 12:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.229 
12:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:04.229 12:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.229 12:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.229 [ 00:13:04.229 { 00:13:04.229 "name": "BaseBdev3", 00:13:04.229 "aliases": [ 00:13:04.229 "8b3b4fd9-294f-4391-b143-f8361f4237a3" 00:13:04.229 ], 00:13:04.229 "product_name": "Malloc disk", 00:13:04.229 "block_size": 512, 00:13:04.229 "num_blocks": 65536, 00:13:04.229 "uuid": "8b3b4fd9-294f-4391-b143-f8361f4237a3", 00:13:04.229 "assigned_rate_limits": { 00:13:04.229 "rw_ios_per_sec": 0, 00:13:04.229 "rw_mbytes_per_sec": 0, 00:13:04.229 "r_mbytes_per_sec": 0, 00:13:04.229 "w_mbytes_per_sec": 0 00:13:04.229 }, 00:13:04.229 "claimed": true, 00:13:04.229 "claim_type": "exclusive_write", 00:13:04.229 "zoned": false, 00:13:04.229 "supported_io_types": { 00:13:04.229 "read": true, 00:13:04.229 "write": true, 00:13:04.229 "unmap": true, 00:13:04.229 "flush": true, 00:13:04.229 "reset": true, 00:13:04.229 "nvme_admin": false, 00:13:04.229 "nvme_io": false, 00:13:04.229 "nvme_io_md": false, 00:13:04.229 "write_zeroes": true, 00:13:04.229 "zcopy": true, 00:13:04.229 "get_zone_info": false, 00:13:04.229 "zone_management": false, 00:13:04.229 "zone_append": false, 00:13:04.229 "compare": false, 00:13:04.229 "compare_and_write": false, 00:13:04.229 "abort": true, 00:13:04.229 "seek_hole": false, 00:13:04.229 "seek_data": false, 00:13:04.229 "copy": true, 00:13:04.229 "nvme_iov_md": false 00:13:04.229 }, 00:13:04.229 "memory_domains": [ 00:13:04.229 { 00:13:04.229 "dma_device_id": "system", 00:13:04.229 "dma_device_type": 1 00:13:04.229 }, 00:13:04.229 { 00:13:04.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.229 "dma_device_type": 2 00:13:04.229 } 00:13:04.229 ], 00:13:04.229 "driver_specific": {} 00:13:04.229 } 00:13:04.229 ] 
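The `waitforbdev BaseBdev3` sequence above polls `bdev_get_bdevs` until the newly created malloc bdev shows up. A minimal self-contained sketch of that polling pattern is below; note that `rpc_cmd` is stubbed here (no SPDK target is available outside the test rig), so the stub's behavior is an assumption made purely so the loop terminates:

```shell
#!/bin/sh
# Sketch of the waitforbdev polling pattern seen in the log.
# rpc_cmd is a stub; in the real autotest it talks to the SPDK target.
rpc_cmd() {
    # Pretend the requested bdev already exists (exit 0).
    [ "$2" = "-b" ] && [ "$3" = "BaseBdev3" ]
}

waitforbdev() {
    bdev_name=$1
    bdev_timeout=${2:-2000}   # milliseconds; mirrors the -t 2000 in the log
    i=0
    while [ "$i" -lt 10 ]; do
        if rpc_cmd bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" >/dev/null 2>&1; then
            return 0
        fi
        i=$((i + 1))
        sleep 0.1
    done
    return 1
}

waitforbdev BaseBdev3 && echo "bdev ready"
```

With the stub answering immediately, the loop exits on the first iteration; against a live target the retries give the bdev layer time to finish examine callbacks.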
00:13:04.229 12:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.229 12:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:04.229 12:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:04.229 12:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:04.229 12:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:13:04.229 12:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:04.229 12:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.229 12:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:04.229 12:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:04.229 12:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:04.229 12:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.229 12:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.229 12:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.229 12:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.229 12:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.229 12:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.229 12:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.229 12:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:13:04.229 12:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.488 12:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.488 "name": "Existed_Raid", 00:13:04.488 "uuid": "dc955c5d-a4ed-49ec-a6c5-3e2ec2b41d33", 00:13:04.488 "strip_size_kb": 64, 00:13:04.488 "state": "online", 00:13:04.488 "raid_level": "raid0", 00:13:04.488 "superblock": false, 00:13:04.488 "num_base_bdevs": 3, 00:13:04.488 "num_base_bdevs_discovered": 3, 00:13:04.488 "num_base_bdevs_operational": 3, 00:13:04.488 "base_bdevs_list": [ 00:13:04.488 { 00:13:04.488 "name": "BaseBdev1", 00:13:04.488 "uuid": "669c50f4-6ba0-4dcd-91b1-f9cfd265fb74", 00:13:04.488 "is_configured": true, 00:13:04.488 "data_offset": 0, 00:13:04.488 "data_size": 65536 00:13:04.488 }, 00:13:04.488 { 00:13:04.488 "name": "BaseBdev2", 00:13:04.488 "uuid": "30616518-98ba-4801-b8aa-46f68d965644", 00:13:04.488 "is_configured": true, 00:13:04.488 "data_offset": 0, 00:13:04.488 "data_size": 65536 00:13:04.488 }, 00:13:04.488 { 00:13:04.488 "name": "BaseBdev3", 00:13:04.488 "uuid": "8b3b4fd9-294f-4391-b143-f8361f4237a3", 00:13:04.488 "is_configured": true, 00:13:04.488 "data_offset": 0, 00:13:04.488 "data_size": 65536 00:13:04.488 } 00:13:04.488 ] 00:13:04.488 }' 00:13:04.488 12:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.488 12:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.746 12:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:04.746 12:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:04.746 12:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:04.746 12:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:13:04.746 12:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:04.746 12:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:04.746 12:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:04.746 12:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.746 12:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.746 12:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:05.005 [2024-11-25 12:12:00.834853] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:05.005 12:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.005 12:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:05.005 "name": "Existed_Raid", 00:13:05.005 "aliases": [ 00:13:05.005 "dc955c5d-a4ed-49ec-a6c5-3e2ec2b41d33" 00:13:05.005 ], 00:13:05.005 "product_name": "Raid Volume", 00:13:05.005 "block_size": 512, 00:13:05.005 "num_blocks": 196608, 00:13:05.005 "uuid": "dc955c5d-a4ed-49ec-a6c5-3e2ec2b41d33", 00:13:05.005 "assigned_rate_limits": { 00:13:05.005 "rw_ios_per_sec": 0, 00:13:05.005 "rw_mbytes_per_sec": 0, 00:13:05.005 "r_mbytes_per_sec": 0, 00:13:05.005 "w_mbytes_per_sec": 0 00:13:05.005 }, 00:13:05.005 "claimed": false, 00:13:05.005 "zoned": false, 00:13:05.005 "supported_io_types": { 00:13:05.005 "read": true, 00:13:05.005 "write": true, 00:13:05.005 "unmap": true, 00:13:05.005 "flush": true, 00:13:05.005 "reset": true, 00:13:05.005 "nvme_admin": false, 00:13:05.005 "nvme_io": false, 00:13:05.005 "nvme_io_md": false, 00:13:05.005 "write_zeroes": true, 00:13:05.005 "zcopy": false, 00:13:05.005 "get_zone_info": false, 00:13:05.005 "zone_management": false, 00:13:05.005 
"zone_append": false, 00:13:05.005 "compare": false, 00:13:05.005 "compare_and_write": false, 00:13:05.005 "abort": false, 00:13:05.005 "seek_hole": false, 00:13:05.005 "seek_data": false, 00:13:05.005 "copy": false, 00:13:05.005 "nvme_iov_md": false 00:13:05.005 }, 00:13:05.005 "memory_domains": [ 00:13:05.005 { 00:13:05.005 "dma_device_id": "system", 00:13:05.005 "dma_device_type": 1 00:13:05.005 }, 00:13:05.005 { 00:13:05.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.005 "dma_device_type": 2 00:13:05.005 }, 00:13:05.005 { 00:13:05.005 "dma_device_id": "system", 00:13:05.005 "dma_device_type": 1 00:13:05.005 }, 00:13:05.005 { 00:13:05.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.005 "dma_device_type": 2 00:13:05.005 }, 00:13:05.005 { 00:13:05.005 "dma_device_id": "system", 00:13:05.005 "dma_device_type": 1 00:13:05.005 }, 00:13:05.005 { 00:13:05.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.005 "dma_device_type": 2 00:13:05.005 } 00:13:05.005 ], 00:13:05.005 "driver_specific": { 00:13:05.005 "raid": { 00:13:05.005 "uuid": "dc955c5d-a4ed-49ec-a6c5-3e2ec2b41d33", 00:13:05.005 "strip_size_kb": 64, 00:13:05.005 "state": "online", 00:13:05.005 "raid_level": "raid0", 00:13:05.005 "superblock": false, 00:13:05.005 "num_base_bdevs": 3, 00:13:05.005 "num_base_bdevs_discovered": 3, 00:13:05.005 "num_base_bdevs_operational": 3, 00:13:05.005 "base_bdevs_list": [ 00:13:05.005 { 00:13:05.005 "name": "BaseBdev1", 00:13:05.005 "uuid": "669c50f4-6ba0-4dcd-91b1-f9cfd265fb74", 00:13:05.005 "is_configured": true, 00:13:05.005 "data_offset": 0, 00:13:05.005 "data_size": 65536 00:13:05.005 }, 00:13:05.005 { 00:13:05.005 "name": "BaseBdev2", 00:13:05.005 "uuid": "30616518-98ba-4801-b8aa-46f68d965644", 00:13:05.005 "is_configured": true, 00:13:05.005 "data_offset": 0, 00:13:05.005 "data_size": 65536 00:13:05.005 }, 00:13:05.005 { 00:13:05.005 "name": "BaseBdev3", 00:13:05.005 "uuid": "8b3b4fd9-294f-4391-b143-f8361f4237a3", 00:13:05.005 "is_configured": true, 
00:13:05.005 "data_offset": 0, 00:13:05.005 "data_size": 65536 00:13:05.005 } 00:13:05.005 ] 00:13:05.005 } 00:13:05.005 } 00:13:05.005 }' 00:13:05.005 12:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:05.005 12:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:05.005 BaseBdev2 00:13:05.005 BaseBdev3' 00:13:05.005 12:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:05.005 12:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:05.005 12:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:05.005 12:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:05.005 12:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:05.005 12:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.005 12:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.005 12:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.005 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:05.005 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:05.005 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:05.005 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:05.005 12:12:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.006 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:05.006 12:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.006 12:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.006 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:05.006 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:05.006 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:05.006 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:05.006 12:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.006 12:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.006 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:05.264 12:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.264 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:05.264 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:05.264 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:05.264 12:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.264 12:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.264 [2024-11-25 12:12:01.134574] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:05.264 [2024-11-25 12:12:01.134608] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:05.264 [2024-11-25 12:12:01.134675] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:05.264 12:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.264 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:05.264 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:13:05.264 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:05.264 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:05.264 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:05.264 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:13:05.264 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:05.264 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:05.264 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:05.264 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:05.264 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:05.264 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.264 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.264 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
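The `has_redundancy raid0` call above returns 1, which is why deleting BaseBdev1 drives the expected state to `offline` rather than degraded-online. A sketch of that decision is below; the exact set of redundant level names is an assumption based on what this log exercises, not a definitive copy of `bdev_raid.sh`:

```shell
#!/bin/sh
# Sketch of the has_redundancy logic visible in the log: raid0 has no
# redundancy, so losing a base bdev takes the raid bdev offline.
# The redundant level names (raid1, raid5f) are assumed for illustration.
has_redundancy() {
    case $1 in
        raid1 | raid5f) return 0 ;;
        *) return 1 ;;
    esac
}

if has_redundancy raid0; then
    expected_state=online
else
    expected_state=offline
fi
echo "$expected_state"
```

This matches the transition recorded in the log: after `bdev_malloc_delete BaseBdev1`, the raid bdev state changes from online to offline.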
00:13:05.264 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.264 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:05.264 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.264 12:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.264 12:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.264 12:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.264 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.264 "name": "Existed_Raid", 00:13:05.264 "uuid": "dc955c5d-a4ed-49ec-a6c5-3e2ec2b41d33", 00:13:05.264 "strip_size_kb": 64, 00:13:05.264 "state": "offline", 00:13:05.264 "raid_level": "raid0", 00:13:05.264 "superblock": false, 00:13:05.264 "num_base_bdevs": 3, 00:13:05.264 "num_base_bdevs_discovered": 2, 00:13:05.264 "num_base_bdevs_operational": 2, 00:13:05.264 "base_bdevs_list": [ 00:13:05.264 { 00:13:05.264 "name": null, 00:13:05.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.264 "is_configured": false, 00:13:05.264 "data_offset": 0, 00:13:05.264 "data_size": 65536 00:13:05.264 }, 00:13:05.264 { 00:13:05.264 "name": "BaseBdev2", 00:13:05.264 "uuid": "30616518-98ba-4801-b8aa-46f68d965644", 00:13:05.264 "is_configured": true, 00:13:05.264 "data_offset": 0, 00:13:05.264 "data_size": 65536 00:13:05.264 }, 00:13:05.264 { 00:13:05.264 "name": "BaseBdev3", 00:13:05.264 "uuid": "8b3b4fd9-294f-4391-b143-f8361f4237a3", 00:13:05.264 "is_configured": true, 00:13:05.264 "data_offset": 0, 00:13:05.264 "data_size": 65536 00:13:05.264 } 00:13:05.264 ] 00:13:05.264 }' 00:13:05.264 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.265 12:12:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.831 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:05.831 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:05.832 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.832 12:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.832 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:05.832 12:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.832 12:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.832 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:05.832 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:05.832 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:05.832 12:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.832 12:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.832 [2024-11-25 12:12:01.754649] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:05.832 12:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.832 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:05.832 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:05.832 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.832 12:12:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:05.832 12:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.832 12:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.832 12:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.832 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:05.832 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:05.832 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:05.832 12:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.832 12:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.832 [2024-11-25 12:12:01.893429] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:05.832 [2024-11-25 12:12:01.893622] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:06.129 12:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.129 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:06.129 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:06.129 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:06.129 12:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.129 12:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.129 12:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:13:06.129 12:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.129 12:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:06.129 12:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:06.129 12:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:06.129 12:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:06.129 12:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:06.129 12:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:06.129 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.129 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.129 BaseBdev2 00:13:06.129 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.129 12:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:06.129 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:06.129 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:06.129 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:06.129 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:06.129 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:06.129 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:06.129 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:06.129 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.129 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.129 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:06.129 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.129 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.129 [ 00:13:06.129 { 00:13:06.129 "name": "BaseBdev2", 00:13:06.129 "aliases": [ 00:13:06.129 "992fd424-91be-46da-9a99-17d49470367e" 00:13:06.129 ], 00:13:06.129 "product_name": "Malloc disk", 00:13:06.129 "block_size": 512, 00:13:06.129 "num_blocks": 65536, 00:13:06.129 "uuid": "992fd424-91be-46da-9a99-17d49470367e", 00:13:06.129 "assigned_rate_limits": { 00:13:06.129 "rw_ios_per_sec": 0, 00:13:06.129 "rw_mbytes_per_sec": 0, 00:13:06.129 "r_mbytes_per_sec": 0, 00:13:06.129 "w_mbytes_per_sec": 0 00:13:06.129 }, 00:13:06.129 "claimed": false, 00:13:06.129 "zoned": false, 00:13:06.129 "supported_io_types": { 00:13:06.129 "read": true, 00:13:06.129 "write": true, 00:13:06.129 "unmap": true, 00:13:06.129 "flush": true, 00:13:06.129 "reset": true, 00:13:06.129 "nvme_admin": false, 00:13:06.129 "nvme_io": false, 00:13:06.129 "nvme_io_md": false, 00:13:06.129 "write_zeroes": true, 00:13:06.129 "zcopy": true, 00:13:06.129 "get_zone_info": false, 00:13:06.129 "zone_management": false, 00:13:06.129 "zone_append": false, 00:13:06.129 "compare": false, 00:13:06.129 "compare_and_write": false, 00:13:06.129 "abort": true, 00:13:06.129 "seek_hole": false, 00:13:06.129 "seek_data": false, 00:13:06.129 "copy": true, 00:13:06.129 "nvme_iov_md": false 00:13:06.129 }, 00:13:06.129 "memory_domains": [ 00:13:06.129 { 00:13:06.129 "dma_device_id": "system", 00:13:06.129 "dma_device_type": 1 00:13:06.129 }, 
00:13:06.129 { 00:13:06.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.129 "dma_device_type": 2 00:13:06.129 } 00:13:06.129 ], 00:13:06.129 "driver_specific": {} 00:13:06.129 } 00:13:06.129 ] 00:13:06.129 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.129 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:06.129 12:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:06.130 12:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:06.130 12:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:06.130 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.130 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.130 BaseBdev3 00:13:06.130 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.130 12:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:06.130 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:06.130 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:06.130 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:06.130 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:06.130 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:06.130 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:06.130 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:06.130 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.130 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.130 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:06.130 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.130 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.130 [ 00:13:06.130 { 00:13:06.130 "name": "BaseBdev3", 00:13:06.130 "aliases": [ 00:13:06.130 "33e875b8-759a-4e47-b3e7-c134662ef0ee" 00:13:06.130 ], 00:13:06.130 "product_name": "Malloc disk", 00:13:06.130 "block_size": 512, 00:13:06.130 "num_blocks": 65536, 00:13:06.130 "uuid": "33e875b8-759a-4e47-b3e7-c134662ef0ee", 00:13:06.130 "assigned_rate_limits": { 00:13:06.130 "rw_ios_per_sec": 0, 00:13:06.130 "rw_mbytes_per_sec": 0, 00:13:06.130 "r_mbytes_per_sec": 0, 00:13:06.130 "w_mbytes_per_sec": 0 00:13:06.130 }, 00:13:06.130 "claimed": false, 00:13:06.130 "zoned": false, 00:13:06.130 "supported_io_types": { 00:13:06.130 "read": true, 00:13:06.130 "write": true, 00:13:06.130 "unmap": true, 00:13:06.130 "flush": true, 00:13:06.130 "reset": true, 00:13:06.130 "nvme_admin": false, 00:13:06.130 "nvme_io": false, 00:13:06.130 "nvme_io_md": false, 00:13:06.130 "write_zeroes": true, 00:13:06.130 "zcopy": true, 00:13:06.130 "get_zone_info": false, 00:13:06.130 "zone_management": false, 00:13:06.130 "zone_append": false, 00:13:06.130 "compare": false, 00:13:06.130 "compare_and_write": false, 00:13:06.130 "abort": true, 00:13:06.130 "seek_hole": false, 00:13:06.130 "seek_data": false, 00:13:06.130 "copy": true, 00:13:06.130 "nvme_iov_md": false 00:13:06.130 }, 00:13:06.130 "memory_domains": [ 00:13:06.130 { 00:13:06.130 "dma_device_id": "system", 00:13:06.130 "dma_device_type": 1 00:13:06.130 }, 00:13:06.130 { 
00:13:06.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.130 "dma_device_type": 2 00:13:06.130 } 00:13:06.130 ], 00:13:06.130 "driver_specific": {} 00:13:06.130 } 00:13:06.130 ] 00:13:06.130 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.130 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:06.130 12:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:06.130 12:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:06.130 12:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:06.130 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.130 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.130 [2024-11-25 12:12:02.201510] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:06.130 [2024-11-25 12:12:02.201689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:06.130 [2024-11-25 12:12:02.201832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:06.130 [2024-11-25 12:12:02.204310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:06.130 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.130 12:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:06.400 12:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:06.400 12:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:13:06.400 12:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:06.400 12:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.400 12:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:06.400 12:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.400 12:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.400 12:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.400 12:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.400 12:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.400 12:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:06.400 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.400 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.400 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.400 12:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.400 "name": "Existed_Raid", 00:13:06.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.400 "strip_size_kb": 64, 00:13:06.400 "state": "configuring", 00:13:06.400 "raid_level": "raid0", 00:13:06.400 "superblock": false, 00:13:06.400 "num_base_bdevs": 3, 00:13:06.400 "num_base_bdevs_discovered": 2, 00:13:06.400 "num_base_bdevs_operational": 3, 00:13:06.400 "base_bdevs_list": [ 00:13:06.400 { 00:13:06.400 "name": "BaseBdev1", 00:13:06.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.400 
"is_configured": false, 00:13:06.400 "data_offset": 0, 00:13:06.400 "data_size": 0 00:13:06.400 }, 00:13:06.400 { 00:13:06.400 "name": "BaseBdev2", 00:13:06.400 "uuid": "992fd424-91be-46da-9a99-17d49470367e", 00:13:06.400 "is_configured": true, 00:13:06.400 "data_offset": 0, 00:13:06.400 "data_size": 65536 00:13:06.400 }, 00:13:06.400 { 00:13:06.400 "name": "BaseBdev3", 00:13:06.400 "uuid": "33e875b8-759a-4e47-b3e7-c134662ef0ee", 00:13:06.400 "is_configured": true, 00:13:06.400 "data_offset": 0, 00:13:06.400 "data_size": 65536 00:13:06.400 } 00:13:06.400 ] 00:13:06.400 }' 00:13:06.400 12:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.400 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.659 12:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:06.659 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.659 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.659 [2024-11-25 12:12:02.725720] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:06.659 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.659 12:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:06.659 12:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:06.659 12:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:06.659 12:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:06.659 12:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.659 12:12:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:06.659 12:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.659 12:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.659 12:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.659 12:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.659 12:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.659 12:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:06.659 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.659 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.919 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.919 12:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.919 "name": "Existed_Raid", 00:13:06.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.919 "strip_size_kb": 64, 00:13:06.919 "state": "configuring", 00:13:06.919 "raid_level": "raid0", 00:13:06.919 "superblock": false, 00:13:06.919 "num_base_bdevs": 3, 00:13:06.919 "num_base_bdevs_discovered": 1, 00:13:06.919 "num_base_bdevs_operational": 3, 00:13:06.919 "base_bdevs_list": [ 00:13:06.919 { 00:13:06.919 "name": "BaseBdev1", 00:13:06.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.919 "is_configured": false, 00:13:06.919 "data_offset": 0, 00:13:06.919 "data_size": 0 00:13:06.919 }, 00:13:06.919 { 00:13:06.919 "name": null, 00:13:06.919 "uuid": "992fd424-91be-46da-9a99-17d49470367e", 00:13:06.919 "is_configured": false, 00:13:06.919 "data_offset": 0, 
00:13:06.919 "data_size": 65536 00:13:06.919 }, 00:13:06.919 { 00:13:06.919 "name": "BaseBdev3", 00:13:06.919 "uuid": "33e875b8-759a-4e47-b3e7-c134662ef0ee", 00:13:06.919 "is_configured": true, 00:13:06.919 "data_offset": 0, 00:13:06.919 "data_size": 65536 00:13:06.919 } 00:13:06.919 ] 00:13:06.919 }' 00:13:06.919 12:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.919 12:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.177 12:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.177 12:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:07.177 12:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.177 12:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.436 12:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.436 12:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:07.436 12:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:07.436 12:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.436 12:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.436 [2024-11-25 12:12:03.371912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:07.436 BaseBdev1 00:13:07.436 12:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.436 12:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:07.436 12:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:13:07.436 12:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:07.436 12:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:07.436 12:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:07.436 12:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:07.436 12:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:07.436 12:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.436 12:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.436 12:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.437 12:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:07.437 12:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.437 12:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.437 [ 00:13:07.437 { 00:13:07.437 "name": "BaseBdev1", 00:13:07.437 "aliases": [ 00:13:07.437 "83ba033b-7f64-4d75-bbac-f8c4e3a18101" 00:13:07.437 ], 00:13:07.437 "product_name": "Malloc disk", 00:13:07.437 "block_size": 512, 00:13:07.437 "num_blocks": 65536, 00:13:07.437 "uuid": "83ba033b-7f64-4d75-bbac-f8c4e3a18101", 00:13:07.437 "assigned_rate_limits": { 00:13:07.437 "rw_ios_per_sec": 0, 00:13:07.437 "rw_mbytes_per_sec": 0, 00:13:07.437 "r_mbytes_per_sec": 0, 00:13:07.437 "w_mbytes_per_sec": 0 00:13:07.437 }, 00:13:07.437 "claimed": true, 00:13:07.437 "claim_type": "exclusive_write", 00:13:07.437 "zoned": false, 00:13:07.437 "supported_io_types": { 00:13:07.437 "read": true, 00:13:07.437 "write": true, 00:13:07.437 "unmap": 
true, 00:13:07.437 "flush": true, 00:13:07.437 "reset": true, 00:13:07.437 "nvme_admin": false, 00:13:07.437 "nvme_io": false, 00:13:07.437 "nvme_io_md": false, 00:13:07.437 "write_zeroes": true, 00:13:07.437 "zcopy": true, 00:13:07.437 "get_zone_info": false, 00:13:07.437 "zone_management": false, 00:13:07.437 "zone_append": false, 00:13:07.437 "compare": false, 00:13:07.437 "compare_and_write": false, 00:13:07.437 "abort": true, 00:13:07.437 "seek_hole": false, 00:13:07.437 "seek_data": false, 00:13:07.437 "copy": true, 00:13:07.437 "nvme_iov_md": false 00:13:07.437 }, 00:13:07.437 "memory_domains": [ 00:13:07.437 { 00:13:07.437 "dma_device_id": "system", 00:13:07.437 "dma_device_type": 1 00:13:07.437 }, 00:13:07.437 { 00:13:07.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.437 "dma_device_type": 2 00:13:07.437 } 00:13:07.437 ], 00:13:07.437 "driver_specific": {} 00:13:07.437 } 00:13:07.437 ] 00:13:07.437 12:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.437 12:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:07.437 12:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:07.437 12:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:07.437 12:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:07.437 12:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:07.437 12:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:07.437 12:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:07.437 12:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.437 12:12:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.437 12:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.437 12:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.437 12:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.437 12:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:07.437 12:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.437 12:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.437 12:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.437 12:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.437 "name": "Existed_Raid", 00:13:07.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.437 "strip_size_kb": 64, 00:13:07.437 "state": "configuring", 00:13:07.437 "raid_level": "raid0", 00:13:07.437 "superblock": false, 00:13:07.437 "num_base_bdevs": 3, 00:13:07.437 "num_base_bdevs_discovered": 2, 00:13:07.437 "num_base_bdevs_operational": 3, 00:13:07.437 "base_bdevs_list": [ 00:13:07.437 { 00:13:07.437 "name": "BaseBdev1", 00:13:07.437 "uuid": "83ba033b-7f64-4d75-bbac-f8c4e3a18101", 00:13:07.437 "is_configured": true, 00:13:07.437 "data_offset": 0, 00:13:07.437 "data_size": 65536 00:13:07.437 }, 00:13:07.437 { 00:13:07.437 "name": null, 00:13:07.437 "uuid": "992fd424-91be-46da-9a99-17d49470367e", 00:13:07.437 "is_configured": false, 00:13:07.437 "data_offset": 0, 00:13:07.437 "data_size": 65536 00:13:07.437 }, 00:13:07.437 { 00:13:07.437 "name": "BaseBdev3", 00:13:07.437 "uuid": "33e875b8-759a-4e47-b3e7-c134662ef0ee", 00:13:07.437 "is_configured": true, 00:13:07.437 "data_offset": 0, 
00:13:07.437 "data_size": 65536 00:13:07.437 } 00:13:07.437 ] 00:13:07.437 }' 00:13:07.437 12:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.437 12:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.004 12:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.004 12:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:08.004 12:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.004 12:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.004 12:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.004 12:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:08.004 12:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:08.004 12:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.004 12:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.004 [2024-11-25 12:12:03.984139] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:08.004 12:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.004 12:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:08.004 12:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:08.004 12:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:08.004 12:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:13:08.004 12:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:08.004 12:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:08.004 12:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.004 12:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.004 12:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.004 12:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.004 12:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.004 12:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.004 12:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:08.004 12:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.004 12:12:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.004 12:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.004 "name": "Existed_Raid", 00:13:08.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.004 "strip_size_kb": 64, 00:13:08.004 "state": "configuring", 00:13:08.004 "raid_level": "raid0", 00:13:08.004 "superblock": false, 00:13:08.004 "num_base_bdevs": 3, 00:13:08.004 "num_base_bdevs_discovered": 1, 00:13:08.004 "num_base_bdevs_operational": 3, 00:13:08.004 "base_bdevs_list": [ 00:13:08.004 { 00:13:08.004 "name": "BaseBdev1", 00:13:08.004 "uuid": "83ba033b-7f64-4d75-bbac-f8c4e3a18101", 00:13:08.004 "is_configured": true, 00:13:08.004 "data_offset": 0, 00:13:08.004 "data_size": 65536 00:13:08.004 }, 00:13:08.004 { 
00:13:08.004 "name": null, 00:13:08.004 "uuid": "992fd424-91be-46da-9a99-17d49470367e", 00:13:08.004 "is_configured": false, 00:13:08.004 "data_offset": 0, 00:13:08.004 "data_size": 65536 00:13:08.004 }, 00:13:08.004 { 00:13:08.004 "name": null, 00:13:08.004 "uuid": "33e875b8-759a-4e47-b3e7-c134662ef0ee", 00:13:08.004 "is_configured": false, 00:13:08.004 "data_offset": 0, 00:13:08.004 "data_size": 65536 00:13:08.004 } 00:13:08.004 ] 00:13:08.004 }' 00:13:08.004 12:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.004 12:12:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.571 12:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.571 12:12:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.571 12:12:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.571 12:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:08.571 12:12:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.571 12:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:08.571 12:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:08.571 12:12:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.571 12:12:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.571 [2024-11-25 12:12:04.572370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:08.571 12:12:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.571 12:12:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:08.571 12:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:08.571 12:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:08.571 12:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:08.571 12:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:08.571 12:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:08.571 12:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.571 12:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.571 12:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.571 12:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.571 12:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.571 12:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:08.571 12:12:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.571 12:12:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.571 12:12:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.571 12:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.571 "name": "Existed_Raid", 00:13:08.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.571 "strip_size_kb": 64, 00:13:08.571 "state": "configuring", 00:13:08.571 "raid_level": "raid0", 00:13:08.571 
"superblock": false, 00:13:08.571 "num_base_bdevs": 3, 00:13:08.571 "num_base_bdevs_discovered": 2, 00:13:08.571 "num_base_bdevs_operational": 3, 00:13:08.571 "base_bdevs_list": [ 00:13:08.571 { 00:13:08.571 "name": "BaseBdev1", 00:13:08.571 "uuid": "83ba033b-7f64-4d75-bbac-f8c4e3a18101", 00:13:08.571 "is_configured": true, 00:13:08.571 "data_offset": 0, 00:13:08.571 "data_size": 65536 00:13:08.571 }, 00:13:08.571 { 00:13:08.571 "name": null, 00:13:08.571 "uuid": "992fd424-91be-46da-9a99-17d49470367e", 00:13:08.571 "is_configured": false, 00:13:08.571 "data_offset": 0, 00:13:08.571 "data_size": 65536 00:13:08.571 }, 00:13:08.571 { 00:13:08.571 "name": "BaseBdev3", 00:13:08.571 "uuid": "33e875b8-759a-4e47-b3e7-c134662ef0ee", 00:13:08.571 "is_configured": true, 00:13:08.571 "data_offset": 0, 00:13:08.571 "data_size": 65536 00:13:08.571 } 00:13:08.571 ] 00:13:08.571 }' 00:13:08.571 12:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.571 12:12:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.137 12:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.137 12:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.137 12:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:09.137 12:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.137 12:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.137 12:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:09.137 12:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:09.137 12:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
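Each `verify_raid_bdev_state` invocation above extracts the raid bdev's JSON from `bdev_raid_get_bdevs all` with the jq filter shown at `bdev_raid.sh@113`, then compares fields such as `state` and `num_base_bdevs_discovered` against the expected values. A self-contained illustration of that filter against a sample document shaped like the trace output (the sample values are copied from the trace rather than re-queried from a live target):

```shell
#!/usr/bin/env bash
# Sample bdev_raid_get_bdevs output, shaped like the records printed above.
sample='[{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "raid0",
  "strip_size_kb": 64,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 3
}]'

# The exact filter used in the trace: select one raid bdev by name.
raid_bdev_info=$(jq -r '.[] | select(.name == "Existed_Raid")' <<< "$sample")

# Field checks in the same spirit as the helper's comparisons.
state=$(jq -r '.state' <<< "$raid_bdev_info")
discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$raid_bdev_info")
echo "state=$state discovered=$discovered"
```

With BaseBdev1 deleted, the raid stays in `configuring` with one discovered base bdev out of three operational, which is exactly what the next verification step asserts.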
00:13:09.137 12:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.137 [2024-11-25 12:12:05.064476] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:09.137 12:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.137 12:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:09.137 12:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:09.137 12:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:09.137 12:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:09.137 12:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:09.137 12:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:09.137 12:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.137 12:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.137 12:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.137 12:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.137 12:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.137 12:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:09.137 12:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.137 12:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.137 12:12:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.137 12:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.137 "name": "Existed_Raid", 00:13:09.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.137 "strip_size_kb": 64, 00:13:09.137 "state": "configuring", 00:13:09.137 "raid_level": "raid0", 00:13:09.137 "superblock": false, 00:13:09.137 "num_base_bdevs": 3, 00:13:09.137 "num_base_bdevs_discovered": 1, 00:13:09.137 "num_base_bdevs_operational": 3, 00:13:09.137 "base_bdevs_list": [ 00:13:09.137 { 00:13:09.137 "name": null, 00:13:09.137 "uuid": "83ba033b-7f64-4d75-bbac-f8c4e3a18101", 00:13:09.137 "is_configured": false, 00:13:09.137 "data_offset": 0, 00:13:09.137 "data_size": 65536 00:13:09.137 }, 00:13:09.137 { 00:13:09.137 "name": null, 00:13:09.137 "uuid": "992fd424-91be-46da-9a99-17d49470367e", 00:13:09.137 "is_configured": false, 00:13:09.137 "data_offset": 0, 00:13:09.137 "data_size": 65536 00:13:09.137 }, 00:13:09.137 { 00:13:09.137 "name": "BaseBdev3", 00:13:09.137 "uuid": "33e875b8-759a-4e47-b3e7-c134662ef0ee", 00:13:09.137 "is_configured": true, 00:13:09.137 "data_offset": 0, 00:13:09.137 "data_size": 65536 00:13:09.137 } 00:13:09.137 ] 00:13:09.137 }' 00:13:09.137 12:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.137 12:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.703 12:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.703 12:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:09.703 12:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.704 12:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.704 12:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:13:09.704 12:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:09.704 12:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:09.704 12:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.704 12:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.704 [2024-11-25 12:12:05.736403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:09.704 12:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.704 12:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:09.704 12:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:09.704 12:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:09.704 12:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:09.704 12:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:09.704 12:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:09.704 12:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.704 12:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.704 12:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.704 12:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.704 12:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
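The `[[ false == \f\a\l\s\e ]]` and `[[ true == \t\r\u\e ]]` comparisons in the trace probe individual entries of `base_bdevs_list` with filters like `.[0].base_bdevs_list[1].is_configured`. A self-contained sketch of that probe against a sample list mirroring the state printed above (removed slot first, two configured slots after); the sample itself is an illustration, not live RPC output:

```shell
#!/usr/bin/env bash
# Sample base_bdevs_list as printed in the trace: the removed slot has a
# null name and is_configured=false; the remaining slots stay configured.
sample='[{
  "name": "Existed_Raid",
  "base_bdevs_list": [
    { "name": null,        "is_configured": false },
    { "name": "BaseBdev2", "is_configured": true  },
    { "name": "BaseBdev3", "is_configured": true  }
  ]
}]'

# Same probes as the trace: pick one slot'\''s is_configured flag by index.
slot0=$(jq '.[0].base_bdevs_list[0].is_configured' <<< "$sample")
slot1=$(jq '.[0].base_bdevs_list[1].is_configured' <<< "$sample")

# The script then pattern-matches the literal jq output, e.g.:
[[ $slot0 == false ]] && [[ $slot1 == true ]] && echo "slots verified"
```

Matching on the literal strings `false`/`true` works because jq emits bare JSON booleans for these filters; the backslash-escaped patterns in the trace are just bash's xtrace rendering of the same literals.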
00:13:09.704 12:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.704 12:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.704 12:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:09.704 12:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.961 12:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.961 "name": "Existed_Raid", 00:13:09.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.961 "strip_size_kb": 64, 00:13:09.961 "state": "configuring", 00:13:09.961 "raid_level": "raid0", 00:13:09.961 "superblock": false, 00:13:09.961 "num_base_bdevs": 3, 00:13:09.961 "num_base_bdevs_discovered": 2, 00:13:09.961 "num_base_bdevs_operational": 3, 00:13:09.961 "base_bdevs_list": [ 00:13:09.961 { 00:13:09.961 "name": null, 00:13:09.961 "uuid": "83ba033b-7f64-4d75-bbac-f8c4e3a18101", 00:13:09.961 "is_configured": false, 00:13:09.961 "data_offset": 0, 00:13:09.961 "data_size": 65536 00:13:09.961 }, 00:13:09.961 { 00:13:09.961 "name": "BaseBdev2", 00:13:09.961 "uuid": "992fd424-91be-46da-9a99-17d49470367e", 00:13:09.961 "is_configured": true, 00:13:09.962 "data_offset": 0, 00:13:09.962 "data_size": 65536 00:13:09.962 }, 00:13:09.962 { 00:13:09.962 "name": "BaseBdev3", 00:13:09.962 "uuid": "33e875b8-759a-4e47-b3e7-c134662ef0ee", 00:13:09.962 "is_configured": true, 00:13:09.962 "data_offset": 0, 00:13:09.962 "data_size": 65536 00:13:09.962 } 00:13:09.962 ] 00:13:09.962 }' 00:13:09.962 12:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.962 12:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.220 12:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.220 12:12:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.220 12:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.220 12:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:10.220 12:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.480 12:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:10.480 12:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:10.480 12:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.480 12:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.480 12:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.480 12:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.480 12:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 83ba033b-7f64-4d75-bbac-f8c4e3a18101 00:13:10.480 12:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.480 12:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.480 [2024-11-25 12:12:06.406290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:10.480 [2024-11-25 12:12:06.406371] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:10.480 [2024-11-25 12:12:06.406390] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:10.480 [2024-11-25 12:12:06.406718] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:13:10.480 [2024-11-25 12:12:06.406904] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:10.480 [2024-11-25 12:12:06.406920] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:10.480 [2024-11-25 12:12:06.407218] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.480 NewBaseBdev 00:13:10.480 12:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.480 12:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:10.480 12:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:10.480 12:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:10.480 12:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:10.480 12:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:10.480 12:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:10.480 12:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:10.480 12:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.480 12:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.480 12:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.480 12:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:10.480 12:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.480 12:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:13:10.480 [ 00:13:10.480 { 00:13:10.480 "name": "NewBaseBdev", 00:13:10.480 "aliases": [ 00:13:10.480 "83ba033b-7f64-4d75-bbac-f8c4e3a18101" 00:13:10.480 ], 00:13:10.480 "product_name": "Malloc disk", 00:13:10.480 "block_size": 512, 00:13:10.480 "num_blocks": 65536, 00:13:10.480 "uuid": "83ba033b-7f64-4d75-bbac-f8c4e3a18101", 00:13:10.480 "assigned_rate_limits": { 00:13:10.480 "rw_ios_per_sec": 0, 00:13:10.480 "rw_mbytes_per_sec": 0, 00:13:10.480 "r_mbytes_per_sec": 0, 00:13:10.480 "w_mbytes_per_sec": 0 00:13:10.480 }, 00:13:10.480 "claimed": true, 00:13:10.480 "claim_type": "exclusive_write", 00:13:10.480 "zoned": false, 00:13:10.480 "supported_io_types": { 00:13:10.480 "read": true, 00:13:10.480 "write": true, 00:13:10.480 "unmap": true, 00:13:10.480 "flush": true, 00:13:10.480 "reset": true, 00:13:10.480 "nvme_admin": false, 00:13:10.480 "nvme_io": false, 00:13:10.480 "nvme_io_md": false, 00:13:10.480 "write_zeroes": true, 00:13:10.480 "zcopy": true, 00:13:10.480 "get_zone_info": false, 00:13:10.480 "zone_management": false, 00:13:10.480 "zone_append": false, 00:13:10.480 "compare": false, 00:13:10.480 "compare_and_write": false, 00:13:10.480 "abort": true, 00:13:10.480 "seek_hole": false, 00:13:10.480 "seek_data": false, 00:13:10.480 "copy": true, 00:13:10.480 "nvme_iov_md": false 00:13:10.480 }, 00:13:10.480 "memory_domains": [ 00:13:10.480 { 00:13:10.480 "dma_device_id": "system", 00:13:10.480 "dma_device_type": 1 00:13:10.480 }, 00:13:10.480 { 00:13:10.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.480 "dma_device_type": 2 00:13:10.480 } 00:13:10.480 ], 00:13:10.480 "driver_specific": {} 00:13:10.480 } 00:13:10.480 ] 00:13:10.480 12:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.480 12:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:10.480 12:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:13:10.481 12:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:10.481 12:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:10.481 12:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:10.481 12:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:10.481 12:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:10.481 12:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.481 12:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.481 12:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.481 12:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.481 12:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.481 12:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.481 12:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:10.481 12:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.481 12:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.481 12:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.481 "name": "Existed_Raid", 00:13:10.481 "uuid": "d03b4eeb-59ac-4da0-8700-7c8d192c907e", 00:13:10.481 "strip_size_kb": 64, 00:13:10.481 "state": "online", 00:13:10.481 "raid_level": "raid0", 00:13:10.481 "superblock": false, 00:13:10.481 "num_base_bdevs": 3, 00:13:10.481 
"num_base_bdevs_discovered": 3, 00:13:10.481 "num_base_bdevs_operational": 3, 00:13:10.481 "base_bdevs_list": [ 00:13:10.481 { 00:13:10.481 "name": "NewBaseBdev", 00:13:10.481 "uuid": "83ba033b-7f64-4d75-bbac-f8c4e3a18101", 00:13:10.481 "is_configured": true, 00:13:10.481 "data_offset": 0, 00:13:10.481 "data_size": 65536 00:13:10.481 }, 00:13:10.481 { 00:13:10.481 "name": "BaseBdev2", 00:13:10.481 "uuid": "992fd424-91be-46da-9a99-17d49470367e", 00:13:10.481 "is_configured": true, 00:13:10.481 "data_offset": 0, 00:13:10.481 "data_size": 65536 00:13:10.481 }, 00:13:10.481 { 00:13:10.481 "name": "BaseBdev3", 00:13:10.481 "uuid": "33e875b8-759a-4e47-b3e7-c134662ef0ee", 00:13:10.481 "is_configured": true, 00:13:10.481 "data_offset": 0, 00:13:10.481 "data_size": 65536 00:13:10.481 } 00:13:10.481 ] 00:13:10.481 }' 00:13:10.481 12:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.481 12:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.049 12:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:11.049 12:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:11.049 12:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:11.049 12:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:11.049 12:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:11.049 12:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:11.049 12:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:11.049 12:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.049 12:12:06 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:13:11.049 12:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:11.049 [2024-11-25 12:12:06.950890] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:11.049 12:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.049 12:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:11.049 "name": "Existed_Raid", 00:13:11.049 "aliases": [ 00:13:11.049 "d03b4eeb-59ac-4da0-8700-7c8d192c907e" 00:13:11.049 ], 00:13:11.049 "product_name": "Raid Volume", 00:13:11.049 "block_size": 512, 00:13:11.049 "num_blocks": 196608, 00:13:11.049 "uuid": "d03b4eeb-59ac-4da0-8700-7c8d192c907e", 00:13:11.049 "assigned_rate_limits": { 00:13:11.049 "rw_ios_per_sec": 0, 00:13:11.049 "rw_mbytes_per_sec": 0, 00:13:11.049 "r_mbytes_per_sec": 0, 00:13:11.049 "w_mbytes_per_sec": 0 00:13:11.049 }, 00:13:11.049 "claimed": false, 00:13:11.050 "zoned": false, 00:13:11.050 "supported_io_types": { 00:13:11.050 "read": true, 00:13:11.050 "write": true, 00:13:11.050 "unmap": true, 00:13:11.050 "flush": true, 00:13:11.050 "reset": true, 00:13:11.050 "nvme_admin": false, 00:13:11.050 "nvme_io": false, 00:13:11.050 "nvme_io_md": false, 00:13:11.050 "write_zeroes": true, 00:13:11.050 "zcopy": false, 00:13:11.050 "get_zone_info": false, 00:13:11.050 "zone_management": false, 00:13:11.050 "zone_append": false, 00:13:11.050 "compare": false, 00:13:11.050 "compare_and_write": false, 00:13:11.050 "abort": false, 00:13:11.050 "seek_hole": false, 00:13:11.050 "seek_data": false, 00:13:11.050 "copy": false, 00:13:11.050 "nvme_iov_md": false 00:13:11.050 }, 00:13:11.050 "memory_domains": [ 00:13:11.050 { 00:13:11.050 "dma_device_id": "system", 00:13:11.050 "dma_device_type": 1 00:13:11.050 }, 00:13:11.050 { 00:13:11.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.050 "dma_device_type": 2 00:13:11.050 }, 00:13:11.050 
{ 00:13:11.050 "dma_device_id": "system", 00:13:11.050 "dma_device_type": 1 00:13:11.050 }, 00:13:11.050 { 00:13:11.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.050 "dma_device_type": 2 00:13:11.050 }, 00:13:11.050 { 00:13:11.050 "dma_device_id": "system", 00:13:11.050 "dma_device_type": 1 00:13:11.050 }, 00:13:11.050 { 00:13:11.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.050 "dma_device_type": 2 00:13:11.050 } 00:13:11.050 ], 00:13:11.050 "driver_specific": { 00:13:11.050 "raid": { 00:13:11.050 "uuid": "d03b4eeb-59ac-4da0-8700-7c8d192c907e", 00:13:11.050 "strip_size_kb": 64, 00:13:11.050 "state": "online", 00:13:11.050 "raid_level": "raid0", 00:13:11.050 "superblock": false, 00:13:11.050 "num_base_bdevs": 3, 00:13:11.050 "num_base_bdevs_discovered": 3, 00:13:11.050 "num_base_bdevs_operational": 3, 00:13:11.050 "base_bdevs_list": [ 00:13:11.050 { 00:13:11.050 "name": "NewBaseBdev", 00:13:11.050 "uuid": "83ba033b-7f64-4d75-bbac-f8c4e3a18101", 00:13:11.050 "is_configured": true, 00:13:11.050 "data_offset": 0, 00:13:11.050 "data_size": 65536 00:13:11.050 }, 00:13:11.050 { 00:13:11.050 "name": "BaseBdev2", 00:13:11.050 "uuid": "992fd424-91be-46da-9a99-17d49470367e", 00:13:11.050 "is_configured": true, 00:13:11.050 "data_offset": 0, 00:13:11.050 "data_size": 65536 00:13:11.050 }, 00:13:11.050 { 00:13:11.050 "name": "BaseBdev3", 00:13:11.050 "uuid": "33e875b8-759a-4e47-b3e7-c134662ef0ee", 00:13:11.050 "is_configured": true, 00:13:11.050 "data_offset": 0, 00:13:11.050 "data_size": 65536 00:13:11.050 } 00:13:11.050 ] 00:13:11.050 } 00:13:11.050 } 00:13:11.050 }' 00:13:11.050 12:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:11.050 12:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:11.050 BaseBdev2 00:13:11.050 BaseBdev3' 00:13:11.050 12:12:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:11.050 12:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:11.050 12:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:11.050 12:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:11.050 12:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:11.050 12:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.050 12:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.050 12:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.050 12:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:11.050 12:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:11.050 12:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:11.309 12:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:11.309 12:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.309 12:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.309 12:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:11.309 12:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.309 12:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:11.309 
12:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:11.309 12:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:11.309 12:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:11.309 12:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:11.309 12:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.309 12:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.309 12:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.309 12:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:11.309 12:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:11.309 12:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:11.309 12:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.309 12:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.309 [2024-11-25 12:12:07.242557] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:11.309 [2024-11-25 12:12:07.242707] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:11.309 [2024-11-25 12:12:07.242835] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:11.309 [2024-11-25 12:12:07.242908] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:11.309 [2024-11-25 12:12:07.242928] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:11.309 12:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.309 12:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63835 00:13:11.309 12:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63835 ']' 00:13:11.309 12:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63835 00:13:11.309 12:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:13:11.309 12:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:11.309 12:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63835 00:13:11.309 killing process with pid 63835 00:13:11.309 12:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:11.309 12:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:11.309 12:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63835' 00:13:11.309 12:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63835 00:13:11.309 [2024-11-25 12:12:07.278912] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:11.309 12:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63835 00:13:11.569 [2024-11-25 12:12:07.549773] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:12.946 12:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:12.946 00:13:12.946 real 0m11.546s 00:13:12.946 user 0m19.192s 00:13:12.946 sys 0m1.475s 00:13:12.946 12:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:12.946 
12:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.946 ************************************ 00:13:12.946 END TEST raid_state_function_test 00:13:12.946 ************************************ 00:13:12.946 12:12:08 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:13:12.946 12:12:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:12.946 12:12:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:12.946 12:12:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:12.946 ************************************ 00:13:12.946 START TEST raid_state_function_test_sb 00:13:12.946 ************************************ 00:13:12.946 12:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:13:12.946 12:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:13:12.946 12:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:12.946 12:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:12.946 12:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:12.946 12:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:12.946 12:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:12.946 12:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:12.946 12:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:12.946 12:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:12.946 12:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 
00:13:12.946 12:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:12.946 12:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:12.946 12:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:12.946 12:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:12.946 12:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:12.946 12:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:12.946 12:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:12.946 12:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:12.946 12:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:12.946 12:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:12.946 12:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:12.946 12:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:13:12.946 12:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:12.946 12:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:12.946 12:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:12.946 12:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:12.946 12:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64466 00:13:12.946 12:12:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64466' 00:13:12.946 Process raid pid: 64466 00:13:12.946 12:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:12.946 12:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64466 00:13:12.946 12:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64466 ']' 00:13:12.946 12:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.946 12:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:12.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.946 12:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.946 12:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:12.946 12:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.946 [2024-11-25 12:12:08.766821] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:13:12.946 [2024-11-25 12:12:08.767008] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.946 [2024-11-25 12:12:08.958831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.216 [2024-11-25 12:12:09.120004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.475 [2024-11-25 12:12:09.335976] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:13.475 [2024-11-25 12:12:09.336052] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:13.735 12:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:13.735 12:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:13.735 12:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:13.735 12:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.735 12:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.735 [2024-11-25 12:12:09.690613] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:13.735 [2024-11-25 12:12:09.690678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:13.735 [2024-11-25 12:12:09.690695] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:13.735 [2024-11-25 12:12:09.690713] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:13.735 [2024-11-25 12:12:09.690723] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:13:13.735 [2024-11-25 12:12:09.690738] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:13.735 12:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.735 12:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:13.735 12:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:13.735 12:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:13.735 12:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:13.735 12:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:13.735 12:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:13.735 12:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.735 12:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.735 12:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.735 12:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.735 12:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.735 12:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.735 12:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.735 12:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.735 12:12:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.735 12:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.735 "name": "Existed_Raid", 00:13:13.735 "uuid": "f021fa85-5ebf-45e5-a912-29dba75455b9", 00:13:13.735 "strip_size_kb": 64, 00:13:13.735 "state": "configuring", 00:13:13.735 "raid_level": "raid0", 00:13:13.735 "superblock": true, 00:13:13.735 "num_base_bdevs": 3, 00:13:13.735 "num_base_bdevs_discovered": 0, 00:13:13.735 "num_base_bdevs_operational": 3, 00:13:13.735 "base_bdevs_list": [ 00:13:13.735 { 00:13:13.735 "name": "BaseBdev1", 00:13:13.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.735 "is_configured": false, 00:13:13.735 "data_offset": 0, 00:13:13.735 "data_size": 0 00:13:13.735 }, 00:13:13.735 { 00:13:13.735 "name": "BaseBdev2", 00:13:13.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.735 "is_configured": false, 00:13:13.735 "data_offset": 0, 00:13:13.735 "data_size": 0 00:13:13.735 }, 00:13:13.735 { 00:13:13.735 "name": "BaseBdev3", 00:13:13.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.735 "is_configured": false, 00:13:13.735 "data_offset": 0, 00:13:13.735 "data_size": 0 00:13:13.735 } 00:13:13.735 ] 00:13:13.735 }' 00:13:13.735 12:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.735 12:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.301 12:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:14.301 12:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.301 12:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.301 [2024-11-25 12:12:10.194699] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:14.301 [2024-11-25 12:12:10.194779] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:14.301 12:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.301 12:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:14.301 12:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.301 12:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.301 [2024-11-25 12:12:10.202702] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:14.301 [2024-11-25 12:12:10.202790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:14.301 [2024-11-25 12:12:10.202806] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:14.301 [2024-11-25 12:12:10.202822] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:14.301 [2024-11-25 12:12:10.202832] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:14.301 [2024-11-25 12:12:10.202846] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:14.301 12:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.302 12:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:14.302 12:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.302 12:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.302 [2024-11-25 12:12:10.248621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:14.302 BaseBdev1 
00:13:14.302 12:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.302 12:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:14.302 12:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:14.302 12:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:14.302 12:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:14.302 12:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:14.302 12:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:14.302 12:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:14.302 12:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.302 12:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.302 12:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.302 12:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:14.302 12:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.302 12:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.302 [ 00:13:14.302 { 00:13:14.302 "name": "BaseBdev1", 00:13:14.302 "aliases": [ 00:13:14.302 "259a2d3c-d2c6-435a-b3b5-07d48b64d534" 00:13:14.302 ], 00:13:14.302 "product_name": "Malloc disk", 00:13:14.302 "block_size": 512, 00:13:14.302 "num_blocks": 65536, 00:13:14.302 "uuid": "259a2d3c-d2c6-435a-b3b5-07d48b64d534", 00:13:14.302 "assigned_rate_limits": { 00:13:14.302 
"rw_ios_per_sec": 0, 00:13:14.302 "rw_mbytes_per_sec": 0, 00:13:14.302 "r_mbytes_per_sec": 0, 00:13:14.302 "w_mbytes_per_sec": 0 00:13:14.302 }, 00:13:14.302 "claimed": true, 00:13:14.302 "claim_type": "exclusive_write", 00:13:14.302 "zoned": false, 00:13:14.302 "supported_io_types": { 00:13:14.302 "read": true, 00:13:14.302 "write": true, 00:13:14.302 "unmap": true, 00:13:14.302 "flush": true, 00:13:14.302 "reset": true, 00:13:14.302 "nvme_admin": false, 00:13:14.302 "nvme_io": false, 00:13:14.302 "nvme_io_md": false, 00:13:14.302 "write_zeroes": true, 00:13:14.302 "zcopy": true, 00:13:14.302 "get_zone_info": false, 00:13:14.302 "zone_management": false, 00:13:14.302 "zone_append": false, 00:13:14.302 "compare": false, 00:13:14.302 "compare_and_write": false, 00:13:14.302 "abort": true, 00:13:14.302 "seek_hole": false, 00:13:14.302 "seek_data": false, 00:13:14.302 "copy": true, 00:13:14.302 "nvme_iov_md": false 00:13:14.302 }, 00:13:14.302 "memory_domains": [ 00:13:14.302 { 00:13:14.302 "dma_device_id": "system", 00:13:14.302 "dma_device_type": 1 00:13:14.302 }, 00:13:14.302 { 00:13:14.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.302 "dma_device_type": 2 00:13:14.302 } 00:13:14.302 ], 00:13:14.302 "driver_specific": {} 00:13:14.302 } 00:13:14.302 ] 00:13:14.302 12:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.302 12:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:14.302 12:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:14.302 12:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:14.302 12:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:14.302 12:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:13:14.302 12:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:14.302 12:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:14.302 12:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.302 12:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.302 12:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.302 12:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.302 12:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.302 12:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.302 12:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.302 12:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.302 12:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.302 12:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.302 "name": "Existed_Raid", 00:13:14.302 "uuid": "300d8fc9-1fcf-449a-bb9c-88eed1471344", 00:13:14.302 "strip_size_kb": 64, 00:13:14.302 "state": "configuring", 00:13:14.302 "raid_level": "raid0", 00:13:14.302 "superblock": true, 00:13:14.302 "num_base_bdevs": 3, 00:13:14.302 "num_base_bdevs_discovered": 1, 00:13:14.302 "num_base_bdevs_operational": 3, 00:13:14.302 "base_bdevs_list": [ 00:13:14.302 { 00:13:14.302 "name": "BaseBdev1", 00:13:14.302 "uuid": "259a2d3c-d2c6-435a-b3b5-07d48b64d534", 00:13:14.302 "is_configured": true, 00:13:14.302 "data_offset": 2048, 00:13:14.302 "data_size": 63488 
00:13:14.302 }, 00:13:14.302 { 00:13:14.302 "name": "BaseBdev2", 00:13:14.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.302 "is_configured": false, 00:13:14.302 "data_offset": 0, 00:13:14.302 "data_size": 0 00:13:14.302 }, 00:13:14.302 { 00:13:14.302 "name": "BaseBdev3", 00:13:14.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.302 "is_configured": false, 00:13:14.302 "data_offset": 0, 00:13:14.302 "data_size": 0 00:13:14.302 } 00:13:14.302 ] 00:13:14.302 }' 00:13:14.302 12:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.302 12:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.870 12:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:14.870 12:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.870 12:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.870 [2024-11-25 12:12:10.780853] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:14.870 [2024-11-25 12:12:10.780920] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:14.870 12:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.870 12:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:14.870 12:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.870 12:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.870 [2024-11-25 12:12:10.792912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:14.870 [2024-11-25 
12:12:10.795409] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:14.870 [2024-11-25 12:12:10.795465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:14.870 [2024-11-25 12:12:10.795482] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:14.870 [2024-11-25 12:12:10.795499] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:14.870 12:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.870 12:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:14.870 12:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:14.870 12:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:14.870 12:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:14.870 12:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:14.870 12:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:14.870 12:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:14.870 12:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:14.870 12:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.870 12:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.870 12:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.870 12:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:13:14.870 12:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.870 12:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.870 12:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.870 12:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.870 12:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.870 12:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.870 "name": "Existed_Raid", 00:13:14.870 "uuid": "20f8c545-c653-4c8e-a762-12783b73de68", 00:13:14.870 "strip_size_kb": 64, 00:13:14.870 "state": "configuring", 00:13:14.870 "raid_level": "raid0", 00:13:14.870 "superblock": true, 00:13:14.870 "num_base_bdevs": 3, 00:13:14.870 "num_base_bdevs_discovered": 1, 00:13:14.870 "num_base_bdevs_operational": 3, 00:13:14.870 "base_bdevs_list": [ 00:13:14.870 { 00:13:14.870 "name": "BaseBdev1", 00:13:14.870 "uuid": "259a2d3c-d2c6-435a-b3b5-07d48b64d534", 00:13:14.870 "is_configured": true, 00:13:14.870 "data_offset": 2048, 00:13:14.870 "data_size": 63488 00:13:14.870 }, 00:13:14.870 { 00:13:14.870 "name": "BaseBdev2", 00:13:14.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.870 "is_configured": false, 00:13:14.870 "data_offset": 0, 00:13:14.870 "data_size": 0 00:13:14.870 }, 00:13:14.870 { 00:13:14.870 "name": "BaseBdev3", 00:13:14.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.870 "is_configured": false, 00:13:14.870 "data_offset": 0, 00:13:14.870 "data_size": 0 00:13:14.870 } 00:13:14.870 ] 00:13:14.870 }' 00:13:14.870 12:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.870 12:12:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:15.438 12:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:15.438 12:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.438 12:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.438 [2024-11-25 12:12:11.356712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:15.438 BaseBdev2 00:13:15.438 12:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.438 12:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:15.438 12:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:15.439 12:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:15.439 12:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:15.439 12:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:15.439 12:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:15.439 12:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:15.439 12:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.439 12:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.439 12:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.439 12:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:15.439 12:12:11 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.439 12:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.439 [ 00:13:15.439 { 00:13:15.439 "name": "BaseBdev2", 00:13:15.439 "aliases": [ 00:13:15.439 "293d33af-c928-4544-a43f-abbb61c48bdd" 00:13:15.439 ], 00:13:15.439 "product_name": "Malloc disk", 00:13:15.439 "block_size": 512, 00:13:15.439 "num_blocks": 65536, 00:13:15.439 "uuid": "293d33af-c928-4544-a43f-abbb61c48bdd", 00:13:15.439 "assigned_rate_limits": { 00:13:15.439 "rw_ios_per_sec": 0, 00:13:15.439 "rw_mbytes_per_sec": 0, 00:13:15.439 "r_mbytes_per_sec": 0, 00:13:15.439 "w_mbytes_per_sec": 0 00:13:15.439 }, 00:13:15.439 "claimed": true, 00:13:15.439 "claim_type": "exclusive_write", 00:13:15.439 "zoned": false, 00:13:15.439 "supported_io_types": { 00:13:15.439 "read": true, 00:13:15.439 "write": true, 00:13:15.439 "unmap": true, 00:13:15.439 "flush": true, 00:13:15.439 "reset": true, 00:13:15.439 "nvme_admin": false, 00:13:15.439 "nvme_io": false, 00:13:15.439 "nvme_io_md": false, 00:13:15.439 "write_zeroes": true, 00:13:15.439 "zcopy": true, 00:13:15.439 "get_zone_info": false, 00:13:15.439 "zone_management": false, 00:13:15.439 "zone_append": false, 00:13:15.439 "compare": false, 00:13:15.439 "compare_and_write": false, 00:13:15.439 "abort": true, 00:13:15.439 "seek_hole": false, 00:13:15.439 "seek_data": false, 00:13:15.439 "copy": true, 00:13:15.439 "nvme_iov_md": false 00:13:15.439 }, 00:13:15.439 "memory_domains": [ 00:13:15.439 { 00:13:15.439 "dma_device_id": "system", 00:13:15.439 "dma_device_type": 1 00:13:15.439 }, 00:13:15.439 { 00:13:15.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.439 "dma_device_type": 2 00:13:15.439 } 00:13:15.439 ], 00:13:15.439 "driver_specific": {} 00:13:15.439 } 00:13:15.439 ] 00:13:15.439 12:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.439 12:12:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:13:15.439 12:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:15.439 12:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:15.439 12:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:15.439 12:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:15.439 12:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:15.439 12:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:15.439 12:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:15.439 12:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:15.439 12:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.439 12:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.439 12:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.439 12:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.439 12:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.439 12:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.439 12:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.439 12:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.439 12:12:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.439 12:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.439 "name": "Existed_Raid", 00:13:15.439 "uuid": "20f8c545-c653-4c8e-a762-12783b73de68", 00:13:15.439 "strip_size_kb": 64, 00:13:15.439 "state": "configuring", 00:13:15.439 "raid_level": "raid0", 00:13:15.439 "superblock": true, 00:13:15.439 "num_base_bdevs": 3, 00:13:15.439 "num_base_bdevs_discovered": 2, 00:13:15.439 "num_base_bdevs_operational": 3, 00:13:15.439 "base_bdevs_list": [ 00:13:15.439 { 00:13:15.439 "name": "BaseBdev1", 00:13:15.439 "uuid": "259a2d3c-d2c6-435a-b3b5-07d48b64d534", 00:13:15.439 "is_configured": true, 00:13:15.439 "data_offset": 2048, 00:13:15.439 "data_size": 63488 00:13:15.439 }, 00:13:15.439 { 00:13:15.439 "name": "BaseBdev2", 00:13:15.439 "uuid": "293d33af-c928-4544-a43f-abbb61c48bdd", 00:13:15.439 "is_configured": true, 00:13:15.439 "data_offset": 2048, 00:13:15.439 "data_size": 63488 00:13:15.439 }, 00:13:15.439 { 00:13:15.439 "name": "BaseBdev3", 00:13:15.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.439 "is_configured": false, 00:13:15.439 "data_offset": 0, 00:13:15.439 "data_size": 0 00:13:15.439 } 00:13:15.439 ] 00:13:15.439 }' 00:13:15.439 12:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.439 12:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.008 12:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:16.008 12:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.008 12:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.008 [2024-11-25 12:12:11.916913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:16.008 [2024-11-25 12:12:11.917245] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:16.008 [2024-11-25 12:12:11.917278] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:16.008 [2024-11-25 12:12:11.917654] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:16.008 BaseBdev3 00:13:16.008 [2024-11-25 12:12:11.917855] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:16.008 [2024-11-25 12:12:11.917871] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:16.008 [2024-11-25 12:12:11.918079] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:16.008 12:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.008 12:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:16.008 12:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:16.008 12:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:16.008 12:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:16.008 12:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:16.008 12:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:16.008 12:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:16.008 12:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.008 12:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.008 12:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:13:16.008 12:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:16.008 12:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.008 12:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.008 [ 00:13:16.008 { 00:13:16.008 "name": "BaseBdev3", 00:13:16.008 "aliases": [ 00:13:16.008 "ecf78e7c-2bbd-46f0-9589-215af53e455e" 00:13:16.008 ], 00:13:16.008 "product_name": "Malloc disk", 00:13:16.008 "block_size": 512, 00:13:16.008 "num_blocks": 65536, 00:13:16.008 "uuid": "ecf78e7c-2bbd-46f0-9589-215af53e455e", 00:13:16.008 "assigned_rate_limits": { 00:13:16.008 "rw_ios_per_sec": 0, 00:13:16.008 "rw_mbytes_per_sec": 0, 00:13:16.008 "r_mbytes_per_sec": 0, 00:13:16.008 "w_mbytes_per_sec": 0 00:13:16.008 }, 00:13:16.008 "claimed": true, 00:13:16.008 "claim_type": "exclusive_write", 00:13:16.008 "zoned": false, 00:13:16.008 "supported_io_types": { 00:13:16.008 "read": true, 00:13:16.008 "write": true, 00:13:16.008 "unmap": true, 00:13:16.008 "flush": true, 00:13:16.008 "reset": true, 00:13:16.008 "nvme_admin": false, 00:13:16.008 "nvme_io": false, 00:13:16.008 "nvme_io_md": false, 00:13:16.008 "write_zeroes": true, 00:13:16.008 "zcopy": true, 00:13:16.008 "get_zone_info": false, 00:13:16.008 "zone_management": false, 00:13:16.008 "zone_append": false, 00:13:16.008 "compare": false, 00:13:16.008 "compare_and_write": false, 00:13:16.008 "abort": true, 00:13:16.008 "seek_hole": false, 00:13:16.008 "seek_data": false, 00:13:16.008 "copy": true, 00:13:16.008 "nvme_iov_md": false 00:13:16.008 }, 00:13:16.008 "memory_domains": [ 00:13:16.008 { 00:13:16.008 "dma_device_id": "system", 00:13:16.008 "dma_device_type": 1 00:13:16.008 }, 00:13:16.008 { 00:13:16.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.008 "dma_device_type": 2 00:13:16.008 } 00:13:16.008 ], 00:13:16.008 "driver_specific": 
{} 00:13:16.008 } 00:13:16.008 ] 00:13:16.008 12:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.008 12:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:16.008 12:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:16.008 12:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:16.008 12:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:13:16.008 12:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:16.008 12:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.008 12:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:16.008 12:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:16.008 12:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:16.008 12:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.008 12:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.008 12:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.008 12:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.008 12:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.008 12:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.008 12:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:16.008 12:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.008 12:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.008 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.008 "name": "Existed_Raid", 00:13:16.008 "uuid": "20f8c545-c653-4c8e-a762-12783b73de68", 00:13:16.008 "strip_size_kb": 64, 00:13:16.008 "state": "online", 00:13:16.008 "raid_level": "raid0", 00:13:16.008 "superblock": true, 00:13:16.008 "num_base_bdevs": 3, 00:13:16.008 "num_base_bdevs_discovered": 3, 00:13:16.008 "num_base_bdevs_operational": 3, 00:13:16.008 "base_bdevs_list": [ 00:13:16.008 { 00:13:16.008 "name": "BaseBdev1", 00:13:16.008 "uuid": "259a2d3c-d2c6-435a-b3b5-07d48b64d534", 00:13:16.008 "is_configured": true, 00:13:16.008 "data_offset": 2048, 00:13:16.008 "data_size": 63488 00:13:16.008 }, 00:13:16.008 { 00:13:16.008 "name": "BaseBdev2", 00:13:16.008 "uuid": "293d33af-c928-4544-a43f-abbb61c48bdd", 00:13:16.008 "is_configured": true, 00:13:16.008 "data_offset": 2048, 00:13:16.008 "data_size": 63488 00:13:16.008 }, 00:13:16.008 { 00:13:16.008 "name": "BaseBdev3", 00:13:16.008 "uuid": "ecf78e7c-2bbd-46f0-9589-215af53e455e", 00:13:16.008 "is_configured": true, 00:13:16.008 "data_offset": 2048, 00:13:16.008 "data_size": 63488 00:13:16.008 } 00:13:16.008 ] 00:13:16.008 }' 00:13:16.008 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.008 12:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.576 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:16.576 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:16.576 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:13:16.576 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:16.576 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:16.576 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:16.576 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:16.576 12:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.576 12:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.576 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:16.576 [2024-11-25 12:12:12.473533] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:16.576 12:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.577 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:16.577 "name": "Existed_Raid", 00:13:16.577 "aliases": [ 00:13:16.577 "20f8c545-c653-4c8e-a762-12783b73de68" 00:13:16.577 ], 00:13:16.577 "product_name": "Raid Volume", 00:13:16.577 "block_size": 512, 00:13:16.577 "num_blocks": 190464, 00:13:16.577 "uuid": "20f8c545-c653-4c8e-a762-12783b73de68", 00:13:16.577 "assigned_rate_limits": { 00:13:16.577 "rw_ios_per_sec": 0, 00:13:16.577 "rw_mbytes_per_sec": 0, 00:13:16.577 "r_mbytes_per_sec": 0, 00:13:16.577 "w_mbytes_per_sec": 0 00:13:16.577 }, 00:13:16.577 "claimed": false, 00:13:16.577 "zoned": false, 00:13:16.577 "supported_io_types": { 00:13:16.577 "read": true, 00:13:16.577 "write": true, 00:13:16.577 "unmap": true, 00:13:16.577 "flush": true, 00:13:16.577 "reset": true, 00:13:16.577 "nvme_admin": false, 00:13:16.577 "nvme_io": false, 00:13:16.577 "nvme_io_md": false, 00:13:16.577 
"write_zeroes": true, 00:13:16.577 "zcopy": false, 00:13:16.577 "get_zone_info": false, 00:13:16.577 "zone_management": false, 00:13:16.577 "zone_append": false, 00:13:16.577 "compare": false, 00:13:16.577 "compare_and_write": false, 00:13:16.577 "abort": false, 00:13:16.577 "seek_hole": false, 00:13:16.577 "seek_data": false, 00:13:16.577 "copy": false, 00:13:16.577 "nvme_iov_md": false 00:13:16.577 }, 00:13:16.577 "memory_domains": [ 00:13:16.577 { 00:13:16.577 "dma_device_id": "system", 00:13:16.577 "dma_device_type": 1 00:13:16.577 }, 00:13:16.577 { 00:13:16.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.577 "dma_device_type": 2 00:13:16.577 }, 00:13:16.577 { 00:13:16.577 "dma_device_id": "system", 00:13:16.577 "dma_device_type": 1 00:13:16.577 }, 00:13:16.577 { 00:13:16.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.577 "dma_device_type": 2 00:13:16.577 }, 00:13:16.577 { 00:13:16.577 "dma_device_id": "system", 00:13:16.577 "dma_device_type": 1 00:13:16.577 }, 00:13:16.577 { 00:13:16.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.577 "dma_device_type": 2 00:13:16.577 } 00:13:16.577 ], 00:13:16.577 "driver_specific": { 00:13:16.577 "raid": { 00:13:16.577 "uuid": "20f8c545-c653-4c8e-a762-12783b73de68", 00:13:16.577 "strip_size_kb": 64, 00:13:16.577 "state": "online", 00:13:16.577 "raid_level": "raid0", 00:13:16.577 "superblock": true, 00:13:16.577 "num_base_bdevs": 3, 00:13:16.577 "num_base_bdevs_discovered": 3, 00:13:16.577 "num_base_bdevs_operational": 3, 00:13:16.577 "base_bdevs_list": [ 00:13:16.577 { 00:13:16.577 "name": "BaseBdev1", 00:13:16.577 "uuid": "259a2d3c-d2c6-435a-b3b5-07d48b64d534", 00:13:16.577 "is_configured": true, 00:13:16.577 "data_offset": 2048, 00:13:16.577 "data_size": 63488 00:13:16.577 }, 00:13:16.577 { 00:13:16.577 "name": "BaseBdev2", 00:13:16.577 "uuid": "293d33af-c928-4544-a43f-abbb61c48bdd", 00:13:16.577 "is_configured": true, 00:13:16.577 "data_offset": 2048, 00:13:16.577 "data_size": 63488 00:13:16.577 }, 
00:13:16.577 { 00:13:16.577 "name": "BaseBdev3", 00:13:16.577 "uuid": "ecf78e7c-2bbd-46f0-9589-215af53e455e", 00:13:16.577 "is_configured": true, 00:13:16.577 "data_offset": 2048, 00:13:16.577 "data_size": 63488 00:13:16.577 } 00:13:16.577 ] 00:13:16.577 } 00:13:16.577 } 00:13:16.577 }' 00:13:16.577 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:16.577 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:16.577 BaseBdev2 00:13:16.577 BaseBdev3' 00:13:16.577 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:16.577 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:16.577 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:16.577 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:16.577 12:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.577 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:16.577 12:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.577 12:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.836 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:16.836 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:16.836 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:16.836 
12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:16.836 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:16.836 12:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.836 12:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.836 12:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.836 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:16.836 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:16.836 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:16.836 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:16.837 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:16.837 12:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.837 12:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.837 12:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.837 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:16.837 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:16.837 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:16.837 12:12:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.837 12:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.837 [2024-11-25 12:12:12.781246] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:16.837 [2024-11-25 12:12:12.781286] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:16.837 [2024-11-25 12:12:12.781377] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:16.837 12:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.837 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:16.837 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:13:16.837 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:16.837 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:13:16.837 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:16.837 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:13:16.837 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:16.837 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:16.837 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:16.837 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:16.837 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:16.837 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:13:16.837 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.837 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.837 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.837 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.837 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.837 12:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.837 12:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.837 12:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.095 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.095 "name": "Existed_Raid", 00:13:17.095 "uuid": "20f8c545-c653-4c8e-a762-12783b73de68", 00:13:17.095 "strip_size_kb": 64, 00:13:17.095 "state": "offline", 00:13:17.095 "raid_level": "raid0", 00:13:17.095 "superblock": true, 00:13:17.095 "num_base_bdevs": 3, 00:13:17.095 "num_base_bdevs_discovered": 2, 00:13:17.095 "num_base_bdevs_operational": 2, 00:13:17.095 "base_bdevs_list": [ 00:13:17.095 { 00:13:17.095 "name": null, 00:13:17.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.095 "is_configured": false, 00:13:17.095 "data_offset": 0, 00:13:17.095 "data_size": 63488 00:13:17.095 }, 00:13:17.095 { 00:13:17.095 "name": "BaseBdev2", 00:13:17.095 "uuid": "293d33af-c928-4544-a43f-abbb61c48bdd", 00:13:17.095 "is_configured": true, 00:13:17.095 "data_offset": 2048, 00:13:17.095 "data_size": 63488 00:13:17.095 }, 00:13:17.095 { 00:13:17.095 "name": "BaseBdev3", 00:13:17.095 "uuid": "ecf78e7c-2bbd-46f0-9589-215af53e455e", 
00:13:17.095 "is_configured": true, 00:13:17.095 "data_offset": 2048, 00:13:17.095 "data_size": 63488 00:13:17.095 } 00:13:17.095 ] 00:13:17.095 }' 00:13:17.095 12:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.095 12:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.353 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:17.353 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:17.353 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.353 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.353 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:17.353 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.353 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.353 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:17.353 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:17.353 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:17.353 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.354 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.613 [2024-11-25 12:12:13.448215] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:17.613 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.613 12:12:13 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:17.613 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:17.613 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.613 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:17.613 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.613 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.613 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.613 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:17.613 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:17.613 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:17.613 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.613 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.613 [2024-11-25 12:12:13.590734] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:17.613 [2024-11-25 12:12:13.590803] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:17.613 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.613 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:17.613 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:17.613 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] 
| select(.)' 00:13:17.613 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.613 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.613 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.613 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.873 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:17.873 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:17.873 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:17.873 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:17.873 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:17.873 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:17.873 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.873 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.873 BaseBdev2 00:13:17.873 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.873 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:17.873 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:17.873 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:17.873 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:17.873 12:12:13 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:17.873 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:17.873 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:17.873 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.873 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.873 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.873 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:17.873 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.873 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.873 [ 00:13:17.873 { 00:13:17.873 "name": "BaseBdev2", 00:13:17.873 "aliases": [ 00:13:17.873 "541a78a8-f83f-41f1-bc4f-a1d38fb6f6e4" 00:13:17.873 ], 00:13:17.873 "product_name": "Malloc disk", 00:13:17.873 "block_size": 512, 00:13:17.873 "num_blocks": 65536, 00:13:17.873 "uuid": "541a78a8-f83f-41f1-bc4f-a1d38fb6f6e4", 00:13:17.873 "assigned_rate_limits": { 00:13:17.873 "rw_ios_per_sec": 0, 00:13:17.873 "rw_mbytes_per_sec": 0, 00:13:17.873 "r_mbytes_per_sec": 0, 00:13:17.873 "w_mbytes_per_sec": 0 00:13:17.873 }, 00:13:17.873 "claimed": false, 00:13:17.873 "zoned": false, 00:13:17.873 "supported_io_types": { 00:13:17.873 "read": true, 00:13:17.873 "write": true, 00:13:17.873 "unmap": true, 00:13:17.873 "flush": true, 00:13:17.873 "reset": true, 00:13:17.873 "nvme_admin": false, 00:13:17.873 "nvme_io": false, 00:13:17.873 "nvme_io_md": false, 00:13:17.873 "write_zeroes": true, 00:13:17.873 "zcopy": true, 00:13:17.873 "get_zone_info": false, 00:13:17.873 "zone_management": false, 00:13:17.873 
"zone_append": false, 00:13:17.873 "compare": false, 00:13:17.873 "compare_and_write": false, 00:13:17.873 "abort": true, 00:13:17.873 "seek_hole": false, 00:13:17.873 "seek_data": false, 00:13:17.873 "copy": true, 00:13:17.873 "nvme_iov_md": false 00:13:17.873 }, 00:13:17.873 "memory_domains": [ 00:13:17.873 { 00:13:17.873 "dma_device_id": "system", 00:13:17.873 "dma_device_type": 1 00:13:17.873 }, 00:13:17.873 { 00:13:17.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.873 "dma_device_type": 2 00:13:17.873 } 00:13:17.873 ], 00:13:17.873 "driver_specific": {} 00:13:17.873 } 00:13:17.873 ] 00:13:17.873 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.873 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:17.873 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:17.873 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:17.873 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:17.873 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.873 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.873 BaseBdev3 00:13:17.873 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.873 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:17.873 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:17.873 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:17.873 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:17.873 
12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:17.873 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:17.873 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:17.873 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.873 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.873 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.873 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:17.873 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.873 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.873 [ 00:13:17.873 { 00:13:17.873 "name": "BaseBdev3", 00:13:17.873 "aliases": [ 00:13:17.873 "71a3255f-2d31-4095-8e4d-b47bb61b1308" 00:13:17.873 ], 00:13:17.873 "product_name": "Malloc disk", 00:13:17.873 "block_size": 512, 00:13:17.873 "num_blocks": 65536, 00:13:17.873 "uuid": "71a3255f-2d31-4095-8e4d-b47bb61b1308", 00:13:17.873 "assigned_rate_limits": { 00:13:17.873 "rw_ios_per_sec": 0, 00:13:17.873 "rw_mbytes_per_sec": 0, 00:13:17.873 "r_mbytes_per_sec": 0, 00:13:17.873 "w_mbytes_per_sec": 0 00:13:17.873 }, 00:13:17.874 "claimed": false, 00:13:17.874 "zoned": false, 00:13:17.874 "supported_io_types": { 00:13:17.874 "read": true, 00:13:17.874 "write": true, 00:13:17.874 "unmap": true, 00:13:17.874 "flush": true, 00:13:17.874 "reset": true, 00:13:17.874 "nvme_admin": false, 00:13:17.874 "nvme_io": false, 00:13:17.874 "nvme_io_md": false, 00:13:17.874 "write_zeroes": true, 00:13:17.874 "zcopy": true, 00:13:17.874 "get_zone_info": false, 
00:13:17.874 "zone_management": false, 00:13:17.874 "zone_append": false, 00:13:17.874 "compare": false, 00:13:17.874 "compare_and_write": false, 00:13:17.874 "abort": true, 00:13:17.874 "seek_hole": false, 00:13:17.874 "seek_data": false, 00:13:17.874 "copy": true, 00:13:17.874 "nvme_iov_md": false 00:13:17.874 }, 00:13:17.874 "memory_domains": [ 00:13:17.874 { 00:13:17.874 "dma_device_id": "system", 00:13:17.874 "dma_device_type": 1 00:13:17.874 }, 00:13:17.874 { 00:13:17.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.874 "dma_device_type": 2 00:13:17.874 } 00:13:17.874 ], 00:13:17.874 "driver_specific": {} 00:13:17.874 } 00:13:17.874 ] 00:13:17.874 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.874 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:17.874 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:17.874 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:17.874 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:17.874 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.874 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.874 [2024-11-25 12:12:13.890693] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:17.874 [2024-11-25 12:12:13.890749] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:17.874 [2024-11-25 12:12:13.890782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:17.874 [2024-11-25 12:12:13.893176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 
is claimed 00:13:17.874 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.874 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:17.874 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.874 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:17.874 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:17.874 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:17.874 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:17.874 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.874 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.874 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.874 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.874 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.874 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.874 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.874 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.874 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.874 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:13:17.874 "name": "Existed_Raid", 00:13:17.874 "uuid": "d2c85578-05f2-4a37-a444-f235902142e7", 00:13:17.874 "strip_size_kb": 64, 00:13:17.874 "state": "configuring", 00:13:17.874 "raid_level": "raid0", 00:13:17.874 "superblock": true, 00:13:17.874 "num_base_bdevs": 3, 00:13:17.874 "num_base_bdevs_discovered": 2, 00:13:17.874 "num_base_bdevs_operational": 3, 00:13:17.874 "base_bdevs_list": [ 00:13:17.874 { 00:13:17.874 "name": "BaseBdev1", 00:13:17.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.874 "is_configured": false, 00:13:17.874 "data_offset": 0, 00:13:17.874 "data_size": 0 00:13:17.874 }, 00:13:17.874 { 00:13:17.874 "name": "BaseBdev2", 00:13:17.874 "uuid": "541a78a8-f83f-41f1-bc4f-a1d38fb6f6e4", 00:13:17.874 "is_configured": true, 00:13:17.874 "data_offset": 2048, 00:13:17.874 "data_size": 63488 00:13:17.874 }, 00:13:17.874 { 00:13:17.874 "name": "BaseBdev3", 00:13:17.874 "uuid": "71a3255f-2d31-4095-8e4d-b47bb61b1308", 00:13:17.874 "is_configured": true, 00:13:17.874 "data_offset": 2048, 00:13:17.874 "data_size": 63488 00:13:17.874 } 00:13:17.874 ] 00:13:17.874 }' 00:13:17.874 12:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.874 12:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.443 12:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:18.443 12:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.443 12:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.443 [2024-11-25 12:12:14.382842] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:18.443 12:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.443 12:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:18.443 12:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:18.443 12:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:18.443 12:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:18.443 12:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:18.443 12:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:18.443 12:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.443 12:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.443 12:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.443 12:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.443 12:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.443 12:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.443 12:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.443 12:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.443 12:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.443 12:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.443 "name": "Existed_Raid", 00:13:18.443 "uuid": "d2c85578-05f2-4a37-a444-f235902142e7", 00:13:18.443 "strip_size_kb": 64, 00:13:18.443 "state": "configuring", 00:13:18.443 "raid_level": "raid0", 
00:13:18.443 "superblock": true, 00:13:18.443 "num_base_bdevs": 3, 00:13:18.443 "num_base_bdevs_discovered": 1, 00:13:18.443 "num_base_bdevs_operational": 3, 00:13:18.443 "base_bdevs_list": [ 00:13:18.443 { 00:13:18.443 "name": "BaseBdev1", 00:13:18.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.443 "is_configured": false, 00:13:18.443 "data_offset": 0, 00:13:18.443 "data_size": 0 00:13:18.443 }, 00:13:18.443 { 00:13:18.443 "name": null, 00:13:18.443 "uuid": "541a78a8-f83f-41f1-bc4f-a1d38fb6f6e4", 00:13:18.443 "is_configured": false, 00:13:18.443 "data_offset": 0, 00:13:18.443 "data_size": 63488 00:13:18.443 }, 00:13:18.443 { 00:13:18.443 "name": "BaseBdev3", 00:13:18.443 "uuid": "71a3255f-2d31-4095-8e4d-b47bb61b1308", 00:13:18.443 "is_configured": true, 00:13:18.443 "data_offset": 2048, 00:13:18.443 "data_size": 63488 00:13:18.443 } 00:13:18.443 ] 00:13:18.443 }' 00:13:18.443 12:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.443 12:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.109 12:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:19.109 12:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.109 12:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.109 12:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.109 12:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.109 12:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:19.109 12:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:19.109 12:12:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.109 12:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.109 [2024-11-25 12:12:14.986767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:19.109 BaseBdev1 00:13:19.109 12:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.109 12:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:19.109 12:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:19.109 12:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:19.109 12:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:19.109 12:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:19.109 12:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:19.109 12:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:19.109 12:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.109 12:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.109 12:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.109 12:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:19.109 12:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.109 12:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.109 [ 00:13:19.109 { 00:13:19.109 "name": "BaseBdev1", 00:13:19.109 
"aliases": [ 00:13:19.109 "82871c1b-709d-4689-83a4-ac8bd3750490" 00:13:19.109 ], 00:13:19.109 "product_name": "Malloc disk", 00:13:19.109 "block_size": 512, 00:13:19.109 "num_blocks": 65536, 00:13:19.109 "uuid": "82871c1b-709d-4689-83a4-ac8bd3750490", 00:13:19.109 "assigned_rate_limits": { 00:13:19.109 "rw_ios_per_sec": 0, 00:13:19.109 "rw_mbytes_per_sec": 0, 00:13:19.109 "r_mbytes_per_sec": 0, 00:13:19.109 "w_mbytes_per_sec": 0 00:13:19.109 }, 00:13:19.109 "claimed": true, 00:13:19.109 "claim_type": "exclusive_write", 00:13:19.109 "zoned": false, 00:13:19.109 "supported_io_types": { 00:13:19.109 "read": true, 00:13:19.109 "write": true, 00:13:19.109 "unmap": true, 00:13:19.109 "flush": true, 00:13:19.109 "reset": true, 00:13:19.109 "nvme_admin": false, 00:13:19.109 "nvme_io": false, 00:13:19.109 "nvme_io_md": false, 00:13:19.109 "write_zeroes": true, 00:13:19.109 "zcopy": true, 00:13:19.109 "get_zone_info": false, 00:13:19.109 "zone_management": false, 00:13:19.109 "zone_append": false, 00:13:19.109 "compare": false, 00:13:19.109 "compare_and_write": false, 00:13:19.109 "abort": true, 00:13:19.109 "seek_hole": false, 00:13:19.109 "seek_data": false, 00:13:19.109 "copy": true, 00:13:19.109 "nvme_iov_md": false 00:13:19.109 }, 00:13:19.109 "memory_domains": [ 00:13:19.109 { 00:13:19.109 "dma_device_id": "system", 00:13:19.109 "dma_device_type": 1 00:13:19.109 }, 00:13:19.109 { 00:13:19.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.109 "dma_device_type": 2 00:13:19.109 } 00:13:19.109 ], 00:13:19.109 "driver_specific": {} 00:13:19.109 } 00:13:19.109 ] 00:13:19.109 12:12:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.109 12:12:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:19.109 12:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:19.109 12:12:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:19.109 12:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:19.109 12:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:19.109 12:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:19.109 12:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:19.109 12:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.109 12:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.109 12:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.109 12:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.109 12:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.109 12:12:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.110 12:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.110 12:12:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.110 12:12:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.110 12:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.110 "name": "Existed_Raid", 00:13:19.110 "uuid": "d2c85578-05f2-4a37-a444-f235902142e7", 00:13:19.110 "strip_size_kb": 64, 00:13:19.110 "state": "configuring", 00:13:19.110 "raid_level": "raid0", 00:13:19.110 "superblock": true, 00:13:19.110 "num_base_bdevs": 3, 00:13:19.110 
"num_base_bdevs_discovered": 2, 00:13:19.110 "num_base_bdevs_operational": 3, 00:13:19.110 "base_bdevs_list": [ 00:13:19.110 { 00:13:19.110 "name": "BaseBdev1", 00:13:19.110 "uuid": "82871c1b-709d-4689-83a4-ac8bd3750490", 00:13:19.110 "is_configured": true, 00:13:19.110 "data_offset": 2048, 00:13:19.110 "data_size": 63488 00:13:19.110 }, 00:13:19.110 { 00:13:19.110 "name": null, 00:13:19.110 "uuid": "541a78a8-f83f-41f1-bc4f-a1d38fb6f6e4", 00:13:19.110 "is_configured": false, 00:13:19.110 "data_offset": 0, 00:13:19.110 "data_size": 63488 00:13:19.110 }, 00:13:19.110 { 00:13:19.110 "name": "BaseBdev3", 00:13:19.110 "uuid": "71a3255f-2d31-4095-8e4d-b47bb61b1308", 00:13:19.110 "is_configured": true, 00:13:19.110 "data_offset": 2048, 00:13:19.110 "data_size": 63488 00:13:19.110 } 00:13:19.110 ] 00:13:19.110 }' 00:13:19.110 12:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.110 12:12:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.679 12:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:19.679 12:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.679 12:12:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.679 12:12:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.679 12:12:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.679 12:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:19.679 12:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:19.679 12:12:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.679 12:12:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.679 [2024-11-25 12:12:15.550959] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:19.679 12:12:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.679 12:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:19.679 12:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:19.679 12:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:19.679 12:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:19.679 12:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:19.679 12:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:19.679 12:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.679 12:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.679 12:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.679 12:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.679 12:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.679 12:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.679 12:12:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.679 12:12:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.679 12:12:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.679 12:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.679 "name": "Existed_Raid", 00:13:19.679 "uuid": "d2c85578-05f2-4a37-a444-f235902142e7", 00:13:19.679 "strip_size_kb": 64, 00:13:19.679 "state": "configuring", 00:13:19.679 "raid_level": "raid0", 00:13:19.679 "superblock": true, 00:13:19.679 "num_base_bdevs": 3, 00:13:19.679 "num_base_bdevs_discovered": 1, 00:13:19.679 "num_base_bdevs_operational": 3, 00:13:19.679 "base_bdevs_list": [ 00:13:19.679 { 00:13:19.679 "name": "BaseBdev1", 00:13:19.679 "uuid": "82871c1b-709d-4689-83a4-ac8bd3750490", 00:13:19.679 "is_configured": true, 00:13:19.679 "data_offset": 2048, 00:13:19.679 "data_size": 63488 00:13:19.679 }, 00:13:19.679 { 00:13:19.679 "name": null, 00:13:19.679 "uuid": "541a78a8-f83f-41f1-bc4f-a1d38fb6f6e4", 00:13:19.679 "is_configured": false, 00:13:19.679 "data_offset": 0, 00:13:19.679 "data_size": 63488 00:13:19.679 }, 00:13:19.679 { 00:13:19.679 "name": null, 00:13:19.679 "uuid": "71a3255f-2d31-4095-8e4d-b47bb61b1308", 00:13:19.679 "is_configured": false, 00:13:19.679 "data_offset": 0, 00:13:19.679 "data_size": 63488 00:13:19.679 } 00:13:19.679 ] 00:13:19.679 }' 00:13:19.679 12:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.679 12:12:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.247 12:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:20.247 12:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.247 12:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.247 12:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.247 12:12:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.247 12:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:20.247 12:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:20.247 12:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.247 12:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.247 [2024-11-25 12:12:16.087133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:20.247 12:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.247 12:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:20.247 12:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:20.247 12:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:20.247 12:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:20.247 12:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:20.247 12:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:20.247 12:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.247 12:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.247 12:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.247 12:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:13:20.247 12:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.247 12:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.247 12:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.247 12:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.247 12:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.247 12:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.247 "name": "Existed_Raid", 00:13:20.247 "uuid": "d2c85578-05f2-4a37-a444-f235902142e7", 00:13:20.247 "strip_size_kb": 64, 00:13:20.247 "state": "configuring", 00:13:20.247 "raid_level": "raid0", 00:13:20.247 "superblock": true, 00:13:20.247 "num_base_bdevs": 3, 00:13:20.247 "num_base_bdevs_discovered": 2, 00:13:20.247 "num_base_bdevs_operational": 3, 00:13:20.247 "base_bdevs_list": [ 00:13:20.247 { 00:13:20.247 "name": "BaseBdev1", 00:13:20.247 "uuid": "82871c1b-709d-4689-83a4-ac8bd3750490", 00:13:20.247 "is_configured": true, 00:13:20.247 "data_offset": 2048, 00:13:20.247 "data_size": 63488 00:13:20.247 }, 00:13:20.247 { 00:13:20.247 "name": null, 00:13:20.247 "uuid": "541a78a8-f83f-41f1-bc4f-a1d38fb6f6e4", 00:13:20.247 "is_configured": false, 00:13:20.247 "data_offset": 0, 00:13:20.247 "data_size": 63488 00:13:20.247 }, 00:13:20.247 { 00:13:20.247 "name": "BaseBdev3", 00:13:20.247 "uuid": "71a3255f-2d31-4095-8e4d-b47bb61b1308", 00:13:20.247 "is_configured": true, 00:13:20.247 "data_offset": 2048, 00:13:20.247 "data_size": 63488 00:13:20.247 } 00:13:20.247 ] 00:13:20.247 }' 00:13:20.247 12:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.247 12:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:13:20.815 12:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:20.815 12:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.815 12:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.815 12:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.815 12:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.815 12:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:20.815 12:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:20.815 12:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.815 12:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.815 [2024-11-25 12:12:16.671310] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:20.815 12:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.815 12:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:20.815 12:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:20.815 12:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:20.815 12:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:20.815 12:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:20.815 12:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:13:20.815 12:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.815 12:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.815 12:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.815 12:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.815 12:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.815 12:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.815 12:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.815 12:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.815 12:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.815 12:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.815 "name": "Existed_Raid", 00:13:20.815 "uuid": "d2c85578-05f2-4a37-a444-f235902142e7", 00:13:20.815 "strip_size_kb": 64, 00:13:20.815 "state": "configuring", 00:13:20.815 "raid_level": "raid0", 00:13:20.815 "superblock": true, 00:13:20.815 "num_base_bdevs": 3, 00:13:20.815 "num_base_bdevs_discovered": 1, 00:13:20.815 "num_base_bdevs_operational": 3, 00:13:20.815 "base_bdevs_list": [ 00:13:20.815 { 00:13:20.815 "name": null, 00:13:20.815 "uuid": "82871c1b-709d-4689-83a4-ac8bd3750490", 00:13:20.815 "is_configured": false, 00:13:20.815 "data_offset": 0, 00:13:20.815 "data_size": 63488 00:13:20.815 }, 00:13:20.815 { 00:13:20.815 "name": null, 00:13:20.815 "uuid": "541a78a8-f83f-41f1-bc4f-a1d38fb6f6e4", 00:13:20.815 "is_configured": false, 00:13:20.815 "data_offset": 0, 00:13:20.815 "data_size": 63488 00:13:20.815 
}, 00:13:20.815 { 00:13:20.815 "name": "BaseBdev3", 00:13:20.815 "uuid": "71a3255f-2d31-4095-8e4d-b47bb61b1308", 00:13:20.815 "is_configured": true, 00:13:20.815 "data_offset": 2048, 00:13:20.815 "data_size": 63488 00:13:20.815 } 00:13:20.815 ] 00:13:20.815 }' 00:13:20.815 12:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.815 12:12:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.383 12:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.383 12:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.383 12:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.383 12:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:21.383 12:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.383 12:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:21.383 12:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:21.383 12:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.383 12:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.383 [2024-11-25 12:12:17.331162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:21.383 12:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.383 12:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:21.383 12:12:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:21.383 12:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:21.383 12:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:21.383 12:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:21.383 12:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:21.383 12:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.383 12:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.383 12:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.383 12:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.383 12:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.383 12:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.383 12:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.383 12:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.383 12:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.383 12:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.383 "name": "Existed_Raid", 00:13:21.383 "uuid": "d2c85578-05f2-4a37-a444-f235902142e7", 00:13:21.383 "strip_size_kb": 64, 00:13:21.383 "state": "configuring", 00:13:21.383 "raid_level": "raid0", 00:13:21.383 "superblock": true, 00:13:21.383 "num_base_bdevs": 3, 00:13:21.383 "num_base_bdevs_discovered": 2, 00:13:21.383 
"num_base_bdevs_operational": 3, 00:13:21.383 "base_bdevs_list": [ 00:13:21.383 { 00:13:21.383 "name": null, 00:13:21.383 "uuid": "82871c1b-709d-4689-83a4-ac8bd3750490", 00:13:21.383 "is_configured": false, 00:13:21.383 "data_offset": 0, 00:13:21.383 "data_size": 63488 00:13:21.383 }, 00:13:21.383 { 00:13:21.383 "name": "BaseBdev2", 00:13:21.383 "uuid": "541a78a8-f83f-41f1-bc4f-a1d38fb6f6e4", 00:13:21.383 "is_configured": true, 00:13:21.383 "data_offset": 2048, 00:13:21.383 "data_size": 63488 00:13:21.383 }, 00:13:21.383 { 00:13:21.383 "name": "BaseBdev3", 00:13:21.383 "uuid": "71a3255f-2d31-4095-8e4d-b47bb61b1308", 00:13:21.383 "is_configured": true, 00:13:21.383 "data_offset": 2048, 00:13:21.383 "data_size": 63488 00:13:21.383 } 00:13:21.383 ] 00:13:21.383 }' 00:13:21.383 12:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.383 12:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.951 12:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:21.951 12:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.951 12:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.951 12:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.951 12:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.951 12:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:21.951 12:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.951 12:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.951 12:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- 
# jq -r '.[0].base_bdevs_list[0].uuid' 00:13:21.951 12:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.951 12:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.951 12:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 82871c1b-709d-4689-83a4-ac8bd3750490 00:13:21.951 12:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.951 12:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.951 [2024-11-25 12:12:18.005731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:21.951 [2024-11-25 12:12:18.005995] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:21.951 [2024-11-25 12:12:18.006042] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:21.951 NewBaseBdev 00:13:21.951 [2024-11-25 12:12:18.006368] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:21.951 [2024-11-25 12:12:18.006566] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:21.951 [2024-11-25 12:12:18.006583] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:21.951 [2024-11-25 12:12:18.006753] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:21.951 12:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.951 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:21.951 12:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:21.951 12:12:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:21.951 12:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:21.951 12:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:21.951 12:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:21.951 12:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:21.951 12:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.951 12:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.951 12:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.951 12:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:21.951 12:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.951 12:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.951 [ 00:13:21.951 { 00:13:21.951 "name": "NewBaseBdev", 00:13:21.951 "aliases": [ 00:13:21.951 "82871c1b-709d-4689-83a4-ac8bd3750490" 00:13:21.951 ], 00:13:21.951 "product_name": "Malloc disk", 00:13:21.951 "block_size": 512, 00:13:21.951 "num_blocks": 65536, 00:13:21.951 "uuid": "82871c1b-709d-4689-83a4-ac8bd3750490", 00:13:21.951 "assigned_rate_limits": { 00:13:21.951 "rw_ios_per_sec": 0, 00:13:21.951 "rw_mbytes_per_sec": 0, 00:13:21.951 "r_mbytes_per_sec": 0, 00:13:21.951 "w_mbytes_per_sec": 0 00:13:21.951 }, 00:13:21.951 "claimed": true, 00:13:21.951 "claim_type": "exclusive_write", 00:13:21.951 "zoned": false, 00:13:21.951 "supported_io_types": { 00:13:21.951 "read": true, 00:13:21.951 "write": true, 00:13:21.951 "unmap": true, 00:13:21.951 "flush": true, 00:13:21.951 
"reset": true, 00:13:21.951 "nvme_admin": false, 00:13:21.951 "nvme_io": false, 00:13:21.951 "nvme_io_md": false, 00:13:21.951 "write_zeroes": true, 00:13:21.951 "zcopy": true, 00:13:21.951 "get_zone_info": false, 00:13:21.951 "zone_management": false, 00:13:21.951 "zone_append": false, 00:13:21.951 "compare": false, 00:13:21.951 "compare_and_write": false, 00:13:21.951 "abort": true, 00:13:21.951 "seek_hole": false, 00:13:21.951 "seek_data": false, 00:13:21.951 "copy": true, 00:13:21.951 "nvme_iov_md": false 00:13:21.951 }, 00:13:21.951 "memory_domains": [ 00:13:21.951 { 00:13:21.951 "dma_device_id": "system", 00:13:21.951 "dma_device_type": 1 00:13:21.951 }, 00:13:21.951 { 00:13:21.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.951 "dma_device_type": 2 00:13:22.210 } 00:13:22.210 ], 00:13:22.210 "driver_specific": {} 00:13:22.210 } 00:13:22.210 ] 00:13:22.210 12:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.210 12:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:22.210 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:13:22.210 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:22.210 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:22.210 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:22.210 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:22.210 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:22.210 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.210 12:12:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.210 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.210 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.210 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.210 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.210 12:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.210 12:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.210 12:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.210 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.210 "name": "Existed_Raid", 00:13:22.210 "uuid": "d2c85578-05f2-4a37-a444-f235902142e7", 00:13:22.210 "strip_size_kb": 64, 00:13:22.210 "state": "online", 00:13:22.210 "raid_level": "raid0", 00:13:22.210 "superblock": true, 00:13:22.210 "num_base_bdevs": 3, 00:13:22.210 "num_base_bdevs_discovered": 3, 00:13:22.210 "num_base_bdevs_operational": 3, 00:13:22.210 "base_bdevs_list": [ 00:13:22.210 { 00:13:22.210 "name": "NewBaseBdev", 00:13:22.210 "uuid": "82871c1b-709d-4689-83a4-ac8bd3750490", 00:13:22.210 "is_configured": true, 00:13:22.210 "data_offset": 2048, 00:13:22.210 "data_size": 63488 00:13:22.210 }, 00:13:22.210 { 00:13:22.210 "name": "BaseBdev2", 00:13:22.210 "uuid": "541a78a8-f83f-41f1-bc4f-a1d38fb6f6e4", 00:13:22.210 "is_configured": true, 00:13:22.210 "data_offset": 2048, 00:13:22.210 "data_size": 63488 00:13:22.210 }, 00:13:22.210 { 00:13:22.210 "name": "BaseBdev3", 00:13:22.210 "uuid": "71a3255f-2d31-4095-8e4d-b47bb61b1308", 00:13:22.210 "is_configured": true, 00:13:22.210 "data_offset": 2048, 
00:13:22.210 "data_size": 63488 00:13:22.210 } 00:13:22.210 ] 00:13:22.210 }' 00:13:22.210 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.210 12:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.468 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:22.468 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:22.468 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:22.468 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:22.468 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:22.468 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:22.468 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:22.468 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:22.468 12:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.468 12:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.468 [2024-11-25 12:12:18.554317] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:22.727 12:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.727 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:22.727 "name": "Existed_Raid", 00:13:22.727 "aliases": [ 00:13:22.727 "d2c85578-05f2-4a37-a444-f235902142e7" 00:13:22.727 ], 00:13:22.727 "product_name": "Raid Volume", 00:13:22.727 "block_size": 512, 00:13:22.727 
"num_blocks": 190464, 00:13:22.727 "uuid": "d2c85578-05f2-4a37-a444-f235902142e7", 00:13:22.727 "assigned_rate_limits": { 00:13:22.727 "rw_ios_per_sec": 0, 00:13:22.727 "rw_mbytes_per_sec": 0, 00:13:22.727 "r_mbytes_per_sec": 0, 00:13:22.727 "w_mbytes_per_sec": 0 00:13:22.727 }, 00:13:22.727 "claimed": false, 00:13:22.727 "zoned": false, 00:13:22.727 "supported_io_types": { 00:13:22.727 "read": true, 00:13:22.727 "write": true, 00:13:22.727 "unmap": true, 00:13:22.727 "flush": true, 00:13:22.727 "reset": true, 00:13:22.727 "nvme_admin": false, 00:13:22.727 "nvme_io": false, 00:13:22.727 "nvme_io_md": false, 00:13:22.727 "write_zeroes": true, 00:13:22.727 "zcopy": false, 00:13:22.727 "get_zone_info": false, 00:13:22.727 "zone_management": false, 00:13:22.727 "zone_append": false, 00:13:22.727 "compare": false, 00:13:22.727 "compare_and_write": false, 00:13:22.727 "abort": false, 00:13:22.727 "seek_hole": false, 00:13:22.727 "seek_data": false, 00:13:22.727 "copy": false, 00:13:22.727 "nvme_iov_md": false 00:13:22.728 }, 00:13:22.728 "memory_domains": [ 00:13:22.728 { 00:13:22.728 "dma_device_id": "system", 00:13:22.728 "dma_device_type": 1 00:13:22.728 }, 00:13:22.728 { 00:13:22.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.728 "dma_device_type": 2 00:13:22.728 }, 00:13:22.728 { 00:13:22.728 "dma_device_id": "system", 00:13:22.728 "dma_device_type": 1 00:13:22.728 }, 00:13:22.728 { 00:13:22.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.728 "dma_device_type": 2 00:13:22.728 }, 00:13:22.728 { 00:13:22.728 "dma_device_id": "system", 00:13:22.728 "dma_device_type": 1 00:13:22.728 }, 00:13:22.728 { 00:13:22.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.728 "dma_device_type": 2 00:13:22.728 } 00:13:22.728 ], 00:13:22.728 "driver_specific": { 00:13:22.728 "raid": { 00:13:22.728 "uuid": "d2c85578-05f2-4a37-a444-f235902142e7", 00:13:22.728 "strip_size_kb": 64, 00:13:22.728 "state": "online", 00:13:22.728 "raid_level": "raid0", 00:13:22.728 
"superblock": true, 00:13:22.728 "num_base_bdevs": 3, 00:13:22.728 "num_base_bdevs_discovered": 3, 00:13:22.728 "num_base_bdevs_operational": 3, 00:13:22.728 "base_bdevs_list": [ 00:13:22.728 { 00:13:22.728 "name": "NewBaseBdev", 00:13:22.728 "uuid": "82871c1b-709d-4689-83a4-ac8bd3750490", 00:13:22.728 "is_configured": true, 00:13:22.728 "data_offset": 2048, 00:13:22.728 "data_size": 63488 00:13:22.728 }, 00:13:22.728 { 00:13:22.728 "name": "BaseBdev2", 00:13:22.728 "uuid": "541a78a8-f83f-41f1-bc4f-a1d38fb6f6e4", 00:13:22.728 "is_configured": true, 00:13:22.728 "data_offset": 2048, 00:13:22.728 "data_size": 63488 00:13:22.728 }, 00:13:22.728 { 00:13:22.728 "name": "BaseBdev3", 00:13:22.728 "uuid": "71a3255f-2d31-4095-8e4d-b47bb61b1308", 00:13:22.728 "is_configured": true, 00:13:22.728 "data_offset": 2048, 00:13:22.728 "data_size": 63488 00:13:22.728 } 00:13:22.728 ] 00:13:22.728 } 00:13:22.728 } 00:13:22.728 }' 00:13:22.728 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:22.728 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:22.728 BaseBdev2 00:13:22.728 BaseBdev3' 00:13:22.728 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:22.728 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:22.728 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:22.728 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:22.728 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:22.728 12:12:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.728 12:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.728 12:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.728 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:22.728 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:22.728 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:22.728 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:22.728 12:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.728 12:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.728 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:22.728 12:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.987 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:22.987 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:22.987 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:22.987 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:22.987 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:22.987 12:12:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.987 12:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.987 12:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.987 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:22.987 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:22.987 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:22.987 12:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.987 12:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.987 [2024-11-25 12:12:18.866005] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:22.987 [2024-11-25 12:12:18.866067] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:22.987 [2024-11-25 12:12:18.866170] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:22.987 [2024-11-25 12:12:18.866244] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:22.987 [2024-11-25 12:12:18.866265] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:22.987 12:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.987 12:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64466 00:13:22.987 12:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64466 ']' 00:13:22.987 12:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64466 00:13:22.987 12:12:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:22.987 12:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:22.987 12:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64466 00:13:22.987 killing process with pid 64466 00:13:22.987 12:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:22.987 12:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:22.987 12:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64466' 00:13:22.987 12:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64466 00:13:22.987 [2024-11-25 12:12:18.905526] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:22.987 12:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64466 00:13:23.245 [2024-11-25 12:12:19.179324] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:24.183 ************************************ 00:13:24.183 END TEST raid_state_function_test_sb 00:13:24.183 ************************************ 00:13:24.183 12:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:24.183 00:13:24.183 real 0m11.562s 00:13:24.183 user 0m19.172s 00:13:24.183 sys 0m1.549s 00:13:24.183 12:12:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:24.183 12:12:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.183 12:12:20 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:13:24.183 12:12:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:24.183 12:12:20 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:13:24.183 12:12:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:24.183 ************************************ 00:13:24.183 START TEST raid_superblock_test 00:13:24.183 ************************************ 00:13:24.183 12:12:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:13:24.183 12:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:13:24.183 12:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:13:24.183 12:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:24.183 12:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:24.183 12:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:24.183 12:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:24.183 12:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:24.183 12:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:24.183 12:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:24.183 12:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:24.183 12:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:24.183 12:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:24.183 12:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:24.183 12:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:13:24.183 12:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:24.183 12:12:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:24.183 12:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65099 00:13:24.183 12:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:24.183 12:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65099 00:13:24.183 12:12:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65099 ']' 00:13:24.442 12:12:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.442 12:12:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:24.442 12:12:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.442 12:12:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:24.442 12:12:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.442 [2024-11-25 12:12:20.359931] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:13:24.442 [2024-11-25 12:12:20.360091] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65099 ] 00:13:24.442 [2024-11-25 12:12:20.531372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.701 [2024-11-25 12:12:20.703117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.960 [2024-11-25 12:12:20.931589] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:24.960 [2024-11-25 12:12:20.931669] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:25.527 12:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:25.527 12:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:25.527 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:25.527 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:25.527 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:25.527 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:25.527 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:25.527 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:25.527 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:25.527 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:25.527 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:25.527 
12:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.527 12:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.527 malloc1 00:13:25.527 12:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.527 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:25.527 12:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.527 12:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.527 [2024-11-25 12:12:21.459121] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:25.527 [2024-11-25 12:12:21.459388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.527 [2024-11-25 12:12:21.459471] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:25.527 [2024-11-25 12:12:21.459671] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.527 [2024-11-25 12:12:21.462541] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.527 [2024-11-25 12:12:21.462716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:25.527 pt1 00:13:25.527 12:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.527 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:25.527 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:25.527 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:25.527 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:25.527 12:12:21 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:25.527 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:25.527 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:25.527 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:25.527 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:25.527 12:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.527 12:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.527 malloc2 00:13:25.527 12:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.527 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:25.527 12:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.528 12:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.528 [2024-11-25 12:12:21.515679] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:25.528 [2024-11-25 12:12:21.515879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.528 [2024-11-25 12:12:21.515968] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:25.528 [2024-11-25 12:12:21.516103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.528 [2024-11-25 12:12:21.518940] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.528 [2024-11-25 12:12:21.519116] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:25.528 
pt2 00:13:25.528 12:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.528 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:25.528 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:25.528 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:25.528 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:25.528 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:25.528 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:25.528 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:25.528 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:25.528 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:25.528 12:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.528 12:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.528 malloc3 00:13:25.528 12:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.528 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:25.528 12:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.528 12:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.528 [2024-11-25 12:12:21.589467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:25.528 [2024-11-25 12:12:21.589530] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.528 [2024-11-25 12:12:21.589578] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:25.528 [2024-11-25 12:12:21.589594] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.528 [2024-11-25 12:12:21.592319] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.528 [2024-11-25 12:12:21.592382] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:25.528 pt3 00:13:25.528 12:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.528 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:25.528 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:25.528 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:13:25.528 12:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.528 12:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.528 [2024-11-25 12:12:21.601524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:25.528 [2024-11-25 12:12:21.603941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:25.528 [2024-11-25 12:12:21.604176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:25.528 [2024-11-25 12:12:21.604418] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:25.528 [2024-11-25 12:12:21.604443] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:25.528 [2024-11-25 12:12:21.604776] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:13:25.528 [2024-11-25 12:12:21.604993] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:25.528 [2024-11-25 12:12:21.605010] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:25.528 [2024-11-25 12:12:21.605197] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:25.528 12:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.528 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:25.528 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.528 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.528 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:25.528 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:25.528 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:25.528 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.528 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.528 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.528 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.528 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.528 12:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.528 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.528 12:12:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.786 12:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.786 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.786 "name": "raid_bdev1", 00:13:25.786 "uuid": "32117c24-ac4a-4a3f-9f8a-ba8a63d88895", 00:13:25.786 "strip_size_kb": 64, 00:13:25.786 "state": "online", 00:13:25.786 "raid_level": "raid0", 00:13:25.786 "superblock": true, 00:13:25.786 "num_base_bdevs": 3, 00:13:25.786 "num_base_bdevs_discovered": 3, 00:13:25.786 "num_base_bdevs_operational": 3, 00:13:25.786 "base_bdevs_list": [ 00:13:25.786 { 00:13:25.786 "name": "pt1", 00:13:25.787 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:25.787 "is_configured": true, 00:13:25.787 "data_offset": 2048, 00:13:25.787 "data_size": 63488 00:13:25.787 }, 00:13:25.787 { 00:13:25.787 "name": "pt2", 00:13:25.787 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:25.787 "is_configured": true, 00:13:25.787 "data_offset": 2048, 00:13:25.787 "data_size": 63488 00:13:25.787 }, 00:13:25.787 { 00:13:25.787 "name": "pt3", 00:13:25.787 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:25.787 "is_configured": true, 00:13:25.787 "data_offset": 2048, 00:13:25.787 "data_size": 63488 00:13:25.787 } 00:13:25.787 ] 00:13:25.787 }' 00:13:25.787 12:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.787 12:12:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.045 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:26.045 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:26.045 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:26.045 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:13:26.045 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:26.045 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:26.045 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:26.045 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.045 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.045 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:26.045 [2024-11-25 12:12:22.122102] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:26.304 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.304 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:26.304 "name": "raid_bdev1", 00:13:26.304 "aliases": [ 00:13:26.304 "32117c24-ac4a-4a3f-9f8a-ba8a63d88895" 00:13:26.304 ], 00:13:26.304 "product_name": "Raid Volume", 00:13:26.304 "block_size": 512, 00:13:26.304 "num_blocks": 190464, 00:13:26.304 "uuid": "32117c24-ac4a-4a3f-9f8a-ba8a63d88895", 00:13:26.304 "assigned_rate_limits": { 00:13:26.304 "rw_ios_per_sec": 0, 00:13:26.304 "rw_mbytes_per_sec": 0, 00:13:26.304 "r_mbytes_per_sec": 0, 00:13:26.304 "w_mbytes_per_sec": 0 00:13:26.304 }, 00:13:26.304 "claimed": false, 00:13:26.304 "zoned": false, 00:13:26.304 "supported_io_types": { 00:13:26.304 "read": true, 00:13:26.304 "write": true, 00:13:26.304 "unmap": true, 00:13:26.304 "flush": true, 00:13:26.304 "reset": true, 00:13:26.304 "nvme_admin": false, 00:13:26.304 "nvme_io": false, 00:13:26.304 "nvme_io_md": false, 00:13:26.304 "write_zeroes": true, 00:13:26.304 "zcopy": false, 00:13:26.304 "get_zone_info": false, 00:13:26.304 "zone_management": false, 00:13:26.304 "zone_append": false, 00:13:26.304 "compare": 
false, 00:13:26.304 "compare_and_write": false, 00:13:26.304 "abort": false, 00:13:26.304 "seek_hole": false, 00:13:26.304 "seek_data": false, 00:13:26.304 "copy": false, 00:13:26.304 "nvme_iov_md": false 00:13:26.304 }, 00:13:26.304 "memory_domains": [ 00:13:26.304 { 00:13:26.304 "dma_device_id": "system", 00:13:26.304 "dma_device_type": 1 00:13:26.304 }, 00:13:26.304 { 00:13:26.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.304 "dma_device_type": 2 00:13:26.304 }, 00:13:26.304 { 00:13:26.304 "dma_device_id": "system", 00:13:26.304 "dma_device_type": 1 00:13:26.304 }, 00:13:26.304 { 00:13:26.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.304 "dma_device_type": 2 00:13:26.304 }, 00:13:26.304 { 00:13:26.304 "dma_device_id": "system", 00:13:26.304 "dma_device_type": 1 00:13:26.304 }, 00:13:26.304 { 00:13:26.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.304 "dma_device_type": 2 00:13:26.304 } 00:13:26.304 ], 00:13:26.304 "driver_specific": { 00:13:26.304 "raid": { 00:13:26.304 "uuid": "32117c24-ac4a-4a3f-9f8a-ba8a63d88895", 00:13:26.304 "strip_size_kb": 64, 00:13:26.304 "state": "online", 00:13:26.304 "raid_level": "raid0", 00:13:26.304 "superblock": true, 00:13:26.304 "num_base_bdevs": 3, 00:13:26.304 "num_base_bdevs_discovered": 3, 00:13:26.304 "num_base_bdevs_operational": 3, 00:13:26.304 "base_bdevs_list": [ 00:13:26.304 { 00:13:26.304 "name": "pt1", 00:13:26.304 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:26.304 "is_configured": true, 00:13:26.304 "data_offset": 2048, 00:13:26.304 "data_size": 63488 00:13:26.304 }, 00:13:26.304 { 00:13:26.304 "name": "pt2", 00:13:26.304 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:26.304 "is_configured": true, 00:13:26.304 "data_offset": 2048, 00:13:26.304 "data_size": 63488 00:13:26.304 }, 00:13:26.304 { 00:13:26.304 "name": "pt3", 00:13:26.304 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:26.304 "is_configured": true, 00:13:26.304 "data_offset": 2048, 00:13:26.304 "data_size": 
63488 00:13:26.304 } 00:13:26.304 ] 00:13:26.304 } 00:13:26.305 } 00:13:26.305 }' 00:13:26.305 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:26.305 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:26.305 pt2 00:13:26.305 pt3' 00:13:26.305 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:26.305 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:26.305 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:26.305 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:26.305 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:26.305 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.305 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.305 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.305 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:26.305 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:26.305 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:26.305 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:26.305 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:26.305 12:12:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.305 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.305 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.305 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:26.305 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:26.305 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:26.305 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:26.305 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.305 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.305 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:26.305 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.564 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:26.564 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:26.564 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:26.564 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.564 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.564 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:26.564 [2024-11-25 12:12:22.422094] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:26.564 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:26.564 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=32117c24-ac4a-4a3f-9f8a-ba8a63d88895 00:13:26.564 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 32117c24-ac4a-4a3f-9f8a-ba8a63d88895 ']' 00:13:26.564 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:26.564 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.564 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.564 [2024-11-25 12:12:22.473762] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:26.564 [2024-11-25 12:12:22.473796] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:26.564 [2024-11-25 12:12:22.473886] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:26.564 [2024-11-25 12:12:22.473998] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:26.565 [2024-11-25 12:12:22.474027] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:26.565 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.565 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.565 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.565 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.565 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:26.565 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.565 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:13:26.565 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:26.565 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:26.565 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:26.565 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.565 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.565 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.565 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:26.565 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:26.565 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.565 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.565 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.565 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:26.565 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:26.565 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.565 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.565 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.565 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:26.565 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:26.565 12:12:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.565 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.565 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.565 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:26.565 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:26.565 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:13:26.565 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:26.565 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:26.565 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:26.565 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:26.565 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:26.565 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:26.565 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.565 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.565 [2024-11-25 12:12:22.641838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:26.565 [2024-11-25 12:12:22.644537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:26.565 [2024-11-25 12:12:22.644736] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:26.565 [2024-11-25 12:12:22.644855] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:26.565 [2024-11-25 12:12:22.645109] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:26.565 [2024-11-25 12:12:22.645284] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:26.565 [2024-11-25 12:12:22.645492] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:26.565 [2024-11-25 12:12:22.645622] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:13:26.565 request: 00:13:26.565 { 00:13:26.565 "name": "raid_bdev1", 00:13:26.565 "raid_level": "raid0", 00:13:26.565 "base_bdevs": [ 00:13:26.565 "malloc1", 00:13:26.565 "malloc2", 00:13:26.565 "malloc3" 00:13:26.565 ], 00:13:26.565 "strip_size_kb": 64, 00:13:26.565 "superblock": false, 00:13:26.565 "method": "bdev_raid_create", 00:13:26.565 "req_id": 1 00:13:26.565 } 00:13:26.565 Got JSON-RPC error response 00:13:26.565 response: 00:13:26.565 { 00:13:26.565 "code": -17, 00:13:26.565 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:26.565 } 00:13:26.565 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:26.565 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:26.565 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:26.565 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:26.565 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:26.824 12:12:22 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.824 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.824 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.824 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:26.824 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.824 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:26.824 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:26.824 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:26.824 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.824 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.824 [2024-11-25 12:12:22.714054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:26.824 [2024-11-25 12:12:22.714265] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:26.824 [2024-11-25 12:12:22.714431] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:26.824 [2024-11-25 12:12:22.714576] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:26.824 [2024-11-25 12:12:22.717600] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:26.824 [2024-11-25 12:12:22.717756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:26.824 [2024-11-25 12:12:22.717990] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:26.824 [2024-11-25 12:12:22.718200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:13:26.824 pt1 00:13:26.824 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.824 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:13:26.824 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:26.824 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:26.824 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:26.824 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:26.824 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:26.824 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.824 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.824 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.824 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.824 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.824 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.824 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.824 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.824 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.824 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.824 "name": "raid_bdev1", 00:13:26.824 "uuid": "32117c24-ac4a-4a3f-9f8a-ba8a63d88895", 00:13:26.824 
"strip_size_kb": 64, 00:13:26.824 "state": "configuring", 00:13:26.824 "raid_level": "raid0", 00:13:26.824 "superblock": true, 00:13:26.824 "num_base_bdevs": 3, 00:13:26.824 "num_base_bdevs_discovered": 1, 00:13:26.824 "num_base_bdevs_operational": 3, 00:13:26.824 "base_bdevs_list": [ 00:13:26.824 { 00:13:26.824 "name": "pt1", 00:13:26.824 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:26.824 "is_configured": true, 00:13:26.824 "data_offset": 2048, 00:13:26.824 "data_size": 63488 00:13:26.824 }, 00:13:26.824 { 00:13:26.824 "name": null, 00:13:26.824 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:26.824 "is_configured": false, 00:13:26.824 "data_offset": 2048, 00:13:26.824 "data_size": 63488 00:13:26.824 }, 00:13:26.824 { 00:13:26.824 "name": null, 00:13:26.824 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:26.824 "is_configured": false, 00:13:26.824 "data_offset": 2048, 00:13:26.824 "data_size": 63488 00:13:26.824 } 00:13:26.824 ] 00:13:26.824 }' 00:13:26.825 12:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.825 12:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.392 12:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:13:27.392 12:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:27.392 12:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.392 12:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.392 [2024-11-25 12:12:23.242264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:27.392 [2024-11-25 12:12:23.242373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:27.392 [2024-11-25 12:12:23.242411] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:13:27.392 [2024-11-25 12:12:23.242426] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:27.392 [2024-11-25 12:12:23.242973] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:27.392 [2024-11-25 12:12:23.243028] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:27.392 [2024-11-25 12:12:23.243139] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:27.392 [2024-11-25 12:12:23.243172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:27.392 pt2 00:13:27.392 12:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.392 12:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:27.392 12:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.392 12:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.392 [2024-11-25 12:12:23.250252] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:27.392 12:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.392 12:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:13:27.392 12:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:27.392 12:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:27.392 12:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:27.393 12:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:27.393 12:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:27.393 12:12:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.393 12:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.393 12:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.393 12:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.393 12:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.393 12:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.393 12:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.393 12:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.393 12:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.393 12:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.393 "name": "raid_bdev1", 00:13:27.393 "uuid": "32117c24-ac4a-4a3f-9f8a-ba8a63d88895", 00:13:27.393 "strip_size_kb": 64, 00:13:27.393 "state": "configuring", 00:13:27.393 "raid_level": "raid0", 00:13:27.393 "superblock": true, 00:13:27.393 "num_base_bdevs": 3, 00:13:27.393 "num_base_bdevs_discovered": 1, 00:13:27.393 "num_base_bdevs_operational": 3, 00:13:27.393 "base_bdevs_list": [ 00:13:27.393 { 00:13:27.393 "name": "pt1", 00:13:27.393 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:27.393 "is_configured": true, 00:13:27.393 "data_offset": 2048, 00:13:27.393 "data_size": 63488 00:13:27.393 }, 00:13:27.393 { 00:13:27.393 "name": null, 00:13:27.393 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:27.393 "is_configured": false, 00:13:27.393 "data_offset": 0, 00:13:27.393 "data_size": 63488 00:13:27.393 }, 00:13:27.393 { 00:13:27.393 "name": null, 00:13:27.393 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:27.393 
"is_configured": false, 00:13:27.393 "data_offset": 2048, 00:13:27.393 "data_size": 63488 00:13:27.393 } 00:13:27.393 ] 00:13:27.393 }' 00:13:27.393 12:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.393 12:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.664 12:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:27.664 12:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:27.664 12:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:27.664 12:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.664 12:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.923 [2024-11-25 12:12:23.758371] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:27.923 [2024-11-25 12:12:23.758604] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:27.923 [2024-11-25 12:12:23.758644] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:27.923 [2024-11-25 12:12:23.758664] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:27.923 [2024-11-25 12:12:23.759241] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:27.923 [2024-11-25 12:12:23.759274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:27.923 [2024-11-25 12:12:23.759407] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:27.923 [2024-11-25 12:12:23.759445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:27.923 pt2 00:13:27.923 12:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:27.923 12:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:27.923 12:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:27.923 12:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:27.923 12:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.923 12:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.923 [2024-11-25 12:12:23.766350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:27.923 [2024-11-25 12:12:23.766403] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:27.923 [2024-11-25 12:12:23.766427] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:27.923 [2024-11-25 12:12:23.766443] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:27.923 [2024-11-25 12:12:23.766901] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:27.923 [2024-11-25 12:12:23.766935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:27.923 [2024-11-25 12:12:23.767010] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:27.923 [2024-11-25 12:12:23.767044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:27.923 [2024-11-25 12:12:23.767189] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:27.923 [2024-11-25 12:12:23.767210] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:27.923 [2024-11-25 12:12:23.767533] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:27.923 [2024-11-25 12:12:23.767719] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:27.923 [2024-11-25 12:12:23.767734] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:27.923 [2024-11-25 12:12:23.767907] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:27.923 pt3 00:13:27.923 12:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.923 12:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:27.923 12:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:27.923 12:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:27.923 12:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:27.923 12:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:27.923 12:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:27.923 12:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:27.923 12:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:27.923 12:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.923 12:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.923 12:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.923 12:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.923 12:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.923 12:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:13:27.923 12:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.923 12:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.923 12:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.923 12:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.923 "name": "raid_bdev1", 00:13:27.923 "uuid": "32117c24-ac4a-4a3f-9f8a-ba8a63d88895", 00:13:27.923 "strip_size_kb": 64, 00:13:27.923 "state": "online", 00:13:27.923 "raid_level": "raid0", 00:13:27.923 "superblock": true, 00:13:27.923 "num_base_bdevs": 3, 00:13:27.923 "num_base_bdevs_discovered": 3, 00:13:27.923 "num_base_bdevs_operational": 3, 00:13:27.923 "base_bdevs_list": [ 00:13:27.923 { 00:13:27.923 "name": "pt1", 00:13:27.923 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:27.923 "is_configured": true, 00:13:27.923 "data_offset": 2048, 00:13:27.923 "data_size": 63488 00:13:27.923 }, 00:13:27.923 { 00:13:27.923 "name": "pt2", 00:13:27.924 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:27.924 "is_configured": true, 00:13:27.924 "data_offset": 2048, 00:13:27.924 "data_size": 63488 00:13:27.924 }, 00:13:27.924 { 00:13:27.924 "name": "pt3", 00:13:27.924 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:27.924 "is_configured": true, 00:13:27.924 "data_offset": 2048, 00:13:27.924 "data_size": 63488 00:13:27.924 } 00:13:27.924 ] 00:13:27.924 }' 00:13:27.924 12:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.924 12:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.492 12:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:28.492 12:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:28.492 12:12:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:28.492 12:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:28.492 12:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:28.492 12:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:28.492 12:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:28.492 12:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:28.492 12:12:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.492 12:12:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.492 [2024-11-25 12:12:24.298930] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:28.492 12:12:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.492 12:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:28.492 "name": "raid_bdev1", 00:13:28.492 "aliases": [ 00:13:28.492 "32117c24-ac4a-4a3f-9f8a-ba8a63d88895" 00:13:28.492 ], 00:13:28.492 "product_name": "Raid Volume", 00:13:28.492 "block_size": 512, 00:13:28.492 "num_blocks": 190464, 00:13:28.492 "uuid": "32117c24-ac4a-4a3f-9f8a-ba8a63d88895", 00:13:28.492 "assigned_rate_limits": { 00:13:28.492 "rw_ios_per_sec": 0, 00:13:28.492 "rw_mbytes_per_sec": 0, 00:13:28.492 "r_mbytes_per_sec": 0, 00:13:28.492 "w_mbytes_per_sec": 0 00:13:28.492 }, 00:13:28.492 "claimed": false, 00:13:28.492 "zoned": false, 00:13:28.492 "supported_io_types": { 00:13:28.492 "read": true, 00:13:28.492 "write": true, 00:13:28.492 "unmap": true, 00:13:28.492 "flush": true, 00:13:28.492 "reset": true, 00:13:28.492 "nvme_admin": false, 00:13:28.492 "nvme_io": false, 00:13:28.492 "nvme_io_md": false, 00:13:28.492 
"write_zeroes": true, 00:13:28.492 "zcopy": false, 00:13:28.492 "get_zone_info": false, 00:13:28.492 "zone_management": false, 00:13:28.492 "zone_append": false, 00:13:28.492 "compare": false, 00:13:28.492 "compare_and_write": false, 00:13:28.492 "abort": false, 00:13:28.492 "seek_hole": false, 00:13:28.492 "seek_data": false, 00:13:28.492 "copy": false, 00:13:28.492 "nvme_iov_md": false 00:13:28.492 }, 00:13:28.492 "memory_domains": [ 00:13:28.492 { 00:13:28.492 "dma_device_id": "system", 00:13:28.492 "dma_device_type": 1 00:13:28.492 }, 00:13:28.492 { 00:13:28.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:28.492 "dma_device_type": 2 00:13:28.492 }, 00:13:28.492 { 00:13:28.492 "dma_device_id": "system", 00:13:28.492 "dma_device_type": 1 00:13:28.492 }, 00:13:28.492 { 00:13:28.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:28.492 "dma_device_type": 2 00:13:28.492 }, 00:13:28.492 { 00:13:28.492 "dma_device_id": "system", 00:13:28.492 "dma_device_type": 1 00:13:28.492 }, 00:13:28.492 { 00:13:28.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:28.492 "dma_device_type": 2 00:13:28.492 } 00:13:28.492 ], 00:13:28.492 "driver_specific": { 00:13:28.492 "raid": { 00:13:28.492 "uuid": "32117c24-ac4a-4a3f-9f8a-ba8a63d88895", 00:13:28.492 "strip_size_kb": 64, 00:13:28.492 "state": "online", 00:13:28.492 "raid_level": "raid0", 00:13:28.492 "superblock": true, 00:13:28.492 "num_base_bdevs": 3, 00:13:28.492 "num_base_bdevs_discovered": 3, 00:13:28.492 "num_base_bdevs_operational": 3, 00:13:28.492 "base_bdevs_list": [ 00:13:28.492 { 00:13:28.492 "name": "pt1", 00:13:28.492 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:28.492 "is_configured": true, 00:13:28.492 "data_offset": 2048, 00:13:28.492 "data_size": 63488 00:13:28.492 }, 00:13:28.492 { 00:13:28.492 "name": "pt2", 00:13:28.492 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:28.492 "is_configured": true, 00:13:28.492 "data_offset": 2048, 00:13:28.492 "data_size": 63488 00:13:28.492 }, 00:13:28.492 
{ 00:13:28.492 "name": "pt3", 00:13:28.492 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:28.492 "is_configured": true, 00:13:28.492 "data_offset": 2048, 00:13:28.492 "data_size": 63488 00:13:28.492 } 00:13:28.492 ] 00:13:28.492 } 00:13:28.492 } 00:13:28.492 }' 00:13:28.492 12:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:28.492 12:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:28.492 pt2 00:13:28.492 pt3' 00:13:28.492 12:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:28.492 12:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:28.492 12:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:28.492 12:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:28.492 12:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:28.492 12:12:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.492 12:12:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.492 12:12:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.492 12:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:28.492 12:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:28.492 12:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:28.492 12:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:28.492 12:12:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.492 12:12:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.492 12:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:28.492 12:12:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.492 12:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:28.492 12:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:28.492 12:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:28.492 12:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:28.492 12:12:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.492 12:12:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.492 12:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:28.492 12:12:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.751 12:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:28.751 12:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:28.751 12:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:28.751 12:12:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.751 12:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:28.751 12:12:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.751 
[2024-11-25 12:12:24.606945] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:28.751 12:12:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.751 12:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 32117c24-ac4a-4a3f-9f8a-ba8a63d88895 '!=' 32117c24-ac4a-4a3f-9f8a-ba8a63d88895 ']' 00:13:28.751 12:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:13:28.751 12:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:28.751 12:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:28.751 12:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65099 00:13:28.751 12:12:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65099 ']' 00:13:28.751 12:12:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65099 00:13:28.751 12:12:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:13:28.751 12:12:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:28.751 12:12:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65099 00:13:28.751 killing process with pid 65099 00:13:28.751 12:12:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:28.751 12:12:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:28.751 12:12:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65099' 00:13:28.751 12:12:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65099 00:13:28.751 [2024-11-25 12:12:24.683817] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:28.751 12:12:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@978 -- # wait 65099 00:13:28.751 [2024-11-25 12:12:24.683937] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:28.751 [2024-11-25 12:12:24.684017] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:28.751 [2024-11-25 12:12:24.684037] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:29.009 [2024-11-25 12:12:24.953927] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:29.945 12:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:29.945 00:13:29.945 real 0m5.727s 00:13:29.945 user 0m8.666s 00:13:29.945 sys 0m0.801s 00:13:29.945 12:12:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:29.945 12:12:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.945 ************************************ 00:13:29.945 END TEST raid_superblock_test 00:13:29.945 ************************************ 00:13:30.207 12:12:26 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:13:30.207 12:12:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:30.207 12:12:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:30.207 12:12:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:30.207 ************************************ 00:13:30.207 START TEST raid_read_error_test 00:13:30.207 ************************************ 00:13:30.207 12:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:13:30.207 12:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:13:30.207 12:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:13:30.207 12:12:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:30.207 12:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:30.207 12:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:30.207 12:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:30.207 12:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:30.207 12:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:30.208 12:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:30.208 12:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:30.208 12:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:30.208 12:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:30.208 12:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:30.208 12:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:30.208 12:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:30.208 12:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:30.208 12:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:30.208 12:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:30.208 12:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:30.208 12:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:30.208 12:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:30.208 12:12:26 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:13:30.208 12:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:30.208 12:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:30.208 12:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:30.208 12:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.6eBrzDogy8 00:13:30.208 12:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65357 00:13:30.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.208 12:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65357 00:13:30.208 12:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65357 ']' 00:13:30.208 12:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:30.208 12:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.208 12:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:30.208 12:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.208 12:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:30.208 12:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.208 [2024-11-25 12:12:26.177417] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:13:30.208 [2024-11-25 12:12:26.177793] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65357 ] 00:13:30.465 [2024-11-25 12:12:26.374083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.465 [2024-11-25 12:12:26.532655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.725 [2024-11-25 12:12:26.754684] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:30.725 [2024-11-25 12:12:26.754776] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:31.289 12:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:31.289 12:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:31.289 12:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:31.289 12:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:31.289 12:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.289 12:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.289 BaseBdev1_malloc 00:13:31.289 12:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.289 12:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:31.289 12:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.289 12:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.289 true 00:13:31.289 12:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:31.289 12:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:31.289 12:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.289 12:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.289 [2024-11-25 12:12:27.259323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:31.289 [2024-11-25 12:12:27.259412] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.289 [2024-11-25 12:12:27.259448] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:31.289 [2024-11-25 12:12:27.259467] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.289 [2024-11-25 12:12:27.262470] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.289 [2024-11-25 12:12:27.262524] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:31.289 BaseBdev1 00:13:31.290 12:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.290 12:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:31.290 12:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:31.290 12:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.290 12:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.290 BaseBdev2_malloc 00:13:31.290 12:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.290 12:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:31.290 12:12:27 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.290 12:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.290 true 00:13:31.290 12:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.290 12:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:31.290 12:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.290 12:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.290 [2024-11-25 12:12:27.331454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:31.290 [2024-11-25 12:12:27.331530] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.290 [2024-11-25 12:12:27.331563] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:31.290 [2024-11-25 12:12:27.331581] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.290 [2024-11-25 12:12:27.334652] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.290 [2024-11-25 12:12:27.334719] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:31.290 BaseBdev2 00:13:31.290 12:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.290 12:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:31.290 12:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:31.290 12:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.290 12:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.548 BaseBdev3_malloc 00:13:31.548 12:12:27 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.548 12:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:31.548 12:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.548 12:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.548 true 00:13:31.548 12:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.548 12:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:31.548 12:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.548 12:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.548 [2024-11-25 12:12:27.409251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:31.548 [2024-11-25 12:12:27.409522] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.548 [2024-11-25 12:12:27.409574] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:31.548 [2024-11-25 12:12:27.409595] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.548 [2024-11-25 12:12:27.412636] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.548 [2024-11-25 12:12:27.412817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:31.548 BaseBdev3 00:13:31.548 12:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.548 12:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:13:31.548 12:12:27 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.548 12:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.548 [2024-11-25 12:12:27.421657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:31.548 [2024-11-25 12:12:27.424193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:31.548 [2024-11-25 12:12:27.424465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:31.548 [2024-11-25 12:12:27.424765] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:31.548 [2024-11-25 12:12:27.424788] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:31.548 [2024-11-25 12:12:27.425145] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:13:31.548 [2024-11-25 12:12:27.425398] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:31.548 [2024-11-25 12:12:27.425424] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:31.548 [2024-11-25 12:12:27.425684] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:31.548 12:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.548 12:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:31.548 12:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:31.548 12:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:31.548 12:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:31.548 12:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:31.548 12:12:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:31.548 12:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.548 12:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.548 12:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.548 12:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.548 12:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.548 12:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.548 12:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.548 12:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.548 12:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.548 12:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.548 "name": "raid_bdev1", 00:13:31.548 "uuid": "7272ca4b-d114-4110-b197-18efb6667cda", 00:13:31.548 "strip_size_kb": 64, 00:13:31.548 "state": "online", 00:13:31.548 "raid_level": "raid0", 00:13:31.548 "superblock": true, 00:13:31.548 "num_base_bdevs": 3, 00:13:31.548 "num_base_bdevs_discovered": 3, 00:13:31.548 "num_base_bdevs_operational": 3, 00:13:31.548 "base_bdevs_list": [ 00:13:31.548 { 00:13:31.548 "name": "BaseBdev1", 00:13:31.548 "uuid": "556588ac-bb51-5854-92d9-7ddc00092ffa", 00:13:31.548 "is_configured": true, 00:13:31.548 "data_offset": 2048, 00:13:31.548 "data_size": 63488 00:13:31.548 }, 00:13:31.548 { 00:13:31.548 "name": "BaseBdev2", 00:13:31.548 "uuid": "ee3a2e36-cbb9-577c-9bc8-25d28884f856", 00:13:31.548 "is_configured": true, 00:13:31.548 "data_offset": 2048, 00:13:31.548 "data_size": 63488 
00:13:31.548 }, 00:13:31.548 { 00:13:31.548 "name": "BaseBdev3", 00:13:31.548 "uuid": "617a2bf1-3901-51d1-979a-1e6c156281b4", 00:13:31.548 "is_configured": true, 00:13:31.548 "data_offset": 2048, 00:13:31.548 "data_size": 63488 00:13:31.548 } 00:13:31.548 ] 00:13:31.548 }' 00:13:31.548 12:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.548 12:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.114 12:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:32.114 12:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:32.114 [2024-11-25 12:12:28.083225] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:33.051 12:12:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:33.051 12:12:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.051 12:12:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.051 12:12:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.051 12:12:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:33.051 12:12:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:33.051 12:12:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:13:33.051 12:12:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:33.051 12:12:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:33.051 12:12:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:13:33.051 12:12:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:33.051 12:12:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:33.051 12:12:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:33.051 12:12:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.051 12:12:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.051 12:12:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.051 12:12:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.051 12:12:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.051 12:12:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.051 12:12:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.051 12:12:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.051 12:12:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.051 12:12:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.051 "name": "raid_bdev1", 00:13:33.051 "uuid": "7272ca4b-d114-4110-b197-18efb6667cda", 00:13:33.051 "strip_size_kb": 64, 00:13:33.051 "state": "online", 00:13:33.051 "raid_level": "raid0", 00:13:33.051 "superblock": true, 00:13:33.051 "num_base_bdevs": 3, 00:13:33.051 "num_base_bdevs_discovered": 3, 00:13:33.051 "num_base_bdevs_operational": 3, 00:13:33.051 "base_bdevs_list": [ 00:13:33.051 { 00:13:33.051 "name": "BaseBdev1", 00:13:33.051 "uuid": "556588ac-bb51-5854-92d9-7ddc00092ffa", 00:13:33.051 "is_configured": true, 00:13:33.051 "data_offset": 2048, 00:13:33.051 "data_size": 63488 
00:13:33.051 }, 00:13:33.051 { 00:13:33.051 "name": "BaseBdev2", 00:13:33.051 "uuid": "ee3a2e36-cbb9-577c-9bc8-25d28884f856", 00:13:33.051 "is_configured": true, 00:13:33.051 "data_offset": 2048, 00:13:33.051 "data_size": 63488 00:13:33.051 }, 00:13:33.051 { 00:13:33.051 "name": "BaseBdev3", 00:13:33.051 "uuid": "617a2bf1-3901-51d1-979a-1e6c156281b4", 00:13:33.051 "is_configured": true, 00:13:33.051 "data_offset": 2048, 00:13:33.051 "data_size": 63488 00:13:33.051 } 00:13:33.051 ] 00:13:33.051 }' 00:13:33.051 12:12:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.051 12:12:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.616 12:12:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:33.616 12:12:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.616 12:12:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.616 [2024-11-25 12:12:29.521691] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:33.616 [2024-11-25 12:12:29.521877] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:33.616 [2024-11-25 12:12:29.525660] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:33.616 [2024-11-25 12:12:29.525959] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:33.616 [2024-11-25 12:12:29.526201] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:33.616 { 00:13:33.616 "results": [ 00:13:33.616 { 00:13:33.616 "job": "raid_bdev1", 00:13:33.616 "core_mask": "0x1", 00:13:33.616 "workload": "randrw", 00:13:33.616 "percentage": 50, 00:13:33.616 "status": "finished", 00:13:33.616 "queue_depth": 1, 00:13:33.616 "io_size": 131072, 00:13:33.616 "runtime": 1.436043, 00:13:33.616 "iops": 9615.310962136928, 00:13:33.616
"mibps": 1201.913870267116, 00:13:33.616 "io_failed": 1, 00:13:33.616 "io_timeout": 0, 00:13:33.616 "avg_latency_us": 145.25012751894351, 00:13:33.616 "min_latency_us": 28.85818181818182, 00:13:33.616 "max_latency_us": 1832.0290909090909 00:13:33.616 } 00:13:33.616 ], 00:13:33.616 "core_count": 1 00:13:33.616 } 00:13:33.616 [2024-11-25 12:12:29.526372] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:33.616 12:12:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.617 12:12:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65357 00:13:33.617 12:12:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65357 ']' 00:13:33.617 12:12:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65357 00:13:33.617 12:12:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:13:33.617 12:12:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:33.617 12:12:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65357 00:13:33.617 killing process with pid 65357 00:13:33.617 12:12:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:33.617 12:12:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:33.617 12:12:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65357' 00:13:33.617 12:12:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65357 00:13:33.617 [2024-11-25 12:12:29.567642] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:33.617 12:12:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65357 00:13:33.617 [2024-11-25
12:12:29.783036] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:34.811 12:12:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.6eBrzDogy8 00:13:34.811 12:12:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:34.811 12:12:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:34.811 12:12:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:13:34.811 12:12:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:13:34.811 12:12:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:34.811 12:12:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:34.811 12:12:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:13:34.811 00:13:34.811 real 0m4.821s 00:13:34.811 user 0m6.005s 00:13:34.811 sys 0m0.594s 00:13:34.811 ************************************ 00:13:34.811 END TEST raid_read_error_test 00:13:34.811 ************************************ 00:13:34.811 12:12:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:34.811 12:12:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.072 12:12:30 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:13:35.072 12:12:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:35.072 12:12:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:35.072 12:12:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:35.072 ************************************ 00:13:35.072 START TEST raid_write_error_test 00:13:35.072 ************************************ 00:13:35.072 12:12:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:13:35.072 12:12:30 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:13:35.072 12:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:13:35.072 12:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:35.072 12:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:35.072 12:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:35.072 12:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:35.072 12:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:35.072 12:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:35.072 12:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:35.072 12:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:35.072 12:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:35.072 12:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:35.072 12:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:35.072 12:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:35.072 12:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:35.072 12:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:35.072 12:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:35.072 12:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:35.072 12:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:35.072 12:12:30 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:35.072 12:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:35.072 12:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:13:35.072 12:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:35.072 12:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:35.072 12:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:35.072 12:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ediis9SRJr 00:13:35.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:35.072 12:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65503 00:13:35.072 12:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:35.072 12:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65503 00:13:35.072 12:12:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65503 ']' 00:13:35.072 12:12:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.072 12:12:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:35.072 12:12:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:35.072 12:12:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:35.072 12:12:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.072 [2024-11-25 12:12:31.042323] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:13:35.072 [2024-11-25 12:12:31.042740] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65503 ] 00:13:35.331 [2024-11-25 12:12:31.224908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:35.331 [2024-11-25 12:12:31.354556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.589 [2024-11-25 12:12:31.559824] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:35.589 [2024-11-25 12:12:31.560031] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:36.156 12:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:36.156 12:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:36.156 12:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:36.156 12:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:36.156 12:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.156 12:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.156 BaseBdev1_malloc 00:13:36.156 12:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.156 12:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:13:36.156 12:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.156 12:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.156 true 00:13:36.156 12:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.156 12:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:36.156 12:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.156 12:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.156 [2024-11-25 12:12:32.125572] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:36.156 [2024-11-25 12:12:32.125652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.156 [2024-11-25 12:12:32.125687] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:36.156 [2024-11-25 12:12:32.125706] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.156 [2024-11-25 12:12:32.128576] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.156 [2024-11-25 12:12:32.128629] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:36.156 BaseBdev1 00:13:36.157 12:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.157 12:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:36.157 12:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:36.157 12:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.157 12:12:32 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:36.157 BaseBdev2_malloc 00:13:36.157 12:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.157 12:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:36.157 12:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.157 12:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.157 true 00:13:36.157 12:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.157 12:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:36.157 12:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.157 12:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.157 [2024-11-25 12:12:32.182923] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:36.157 [2024-11-25 12:12:32.183004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.157 [2024-11-25 12:12:32.183035] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:36.157 [2024-11-25 12:12:32.183052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.157 [2024-11-25 12:12:32.185836] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.157 [2024-11-25 12:12:32.186050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:36.157 BaseBdev2 00:13:36.157 12:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.157 12:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:36.157 12:12:32 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:36.157 12:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.157 12:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.157 BaseBdev3_malloc 00:13:36.157 12:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.157 12:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:36.157 12:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.157 12:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.417 true 00:13:36.417 12:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.417 12:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:36.417 12:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.417 12:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.417 [2024-11-25 12:12:32.252211] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:36.417 [2024-11-25 12:12:32.252282] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.417 [2024-11-25 12:12:32.252315] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:36.417 [2024-11-25 12:12:32.252333] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.417 [2024-11-25 12:12:32.255199] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.417 [2024-11-25 12:12:32.255252] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:13:36.417 BaseBdev3 00:13:36.417 12:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.417 12:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:13:36.417 12:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.417 12:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.417 [2024-11-25 12:12:32.260303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:36.417 [2024-11-25 12:12:32.262915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:36.417 [2024-11-25 12:12:32.263031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:36.417 [2024-11-25 12:12:32.263312] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:36.417 [2024-11-25 12:12:32.263364] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:36.417 [2024-11-25 12:12:32.263688] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:13:36.417 [2024-11-25 12:12:32.263901] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:36.417 [2024-11-25 12:12:32.263925] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:36.417 [2024-11-25 12:12:32.264108] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:36.417 12:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.417 12:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:36.417 12:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:13:36.417 12:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.417 12:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:36.417 12:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:36.417 12:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:36.417 12:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.417 12:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.417 12:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.417 12:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.417 12:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.417 12:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.417 12:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.417 12:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.417 12:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.417 12:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.417 "name": "raid_bdev1", 00:13:36.417 "uuid": "cb5ae8c3-da6b-4711-b2e0-361591a4edea", 00:13:36.417 "strip_size_kb": 64, 00:13:36.417 "state": "online", 00:13:36.417 "raid_level": "raid0", 00:13:36.417 "superblock": true, 00:13:36.417 "num_base_bdevs": 3, 00:13:36.417 "num_base_bdevs_discovered": 3, 00:13:36.417 "num_base_bdevs_operational": 3, 00:13:36.417 "base_bdevs_list": [ 00:13:36.417 { 00:13:36.417 "name": "BaseBdev1", 
00:13:36.417 "uuid": "aa031047-62f1-56e8-a788-87f74274a48e", 00:13:36.417 "is_configured": true, 00:13:36.417 "data_offset": 2048, 00:13:36.417 "data_size": 63488 00:13:36.417 }, 00:13:36.417 { 00:13:36.417 "name": "BaseBdev2", 00:13:36.417 "uuid": "23c8ceec-a5e1-5f38-9f95-a43fb93e49f4", 00:13:36.417 "is_configured": true, 00:13:36.417 "data_offset": 2048, 00:13:36.417 "data_size": 63488 00:13:36.417 }, 00:13:36.417 { 00:13:36.417 "name": "BaseBdev3", 00:13:36.417 "uuid": "6707ab2d-fe39-546e-ada4-230887b8c4cc", 00:13:36.417 "is_configured": true, 00:13:36.417 "data_offset": 2048, 00:13:36.417 "data_size": 63488 00:13:36.417 } 00:13:36.417 ] 00:13:36.417 }' 00:13:36.417 12:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.417 12:12:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.005 12:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:37.005 12:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:37.005 [2024-11-25 12:12:32.929867] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:37.942 12:12:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:37.942 12:12:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.942 12:12:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.942 12:12:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.942 12:12:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:37.942 12:12:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:37.942 12:12:33 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:13:37.942 12:12:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:37.942 12:12:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.942 12:12:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.942 12:12:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:37.942 12:12:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:37.942 12:12:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:37.942 12:12:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.942 12:12:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.942 12:12:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.942 12:12:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.942 12:12:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.942 12:12:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.942 12:12:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.942 12:12:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.942 12:12:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.942 12:12:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.942 "name": "raid_bdev1", 00:13:37.942 "uuid": "cb5ae8c3-da6b-4711-b2e0-361591a4edea", 00:13:37.942 "strip_size_kb": 64, 00:13:37.942 "state": "online", 00:13:37.942 
"raid_level": "raid0", 00:13:37.942 "superblock": true, 00:13:37.942 "num_base_bdevs": 3, 00:13:37.942 "num_base_bdevs_discovered": 3, 00:13:37.942 "num_base_bdevs_operational": 3, 00:13:37.942 "base_bdevs_list": [ 00:13:37.942 { 00:13:37.942 "name": "BaseBdev1", 00:13:37.942 "uuid": "aa031047-62f1-56e8-a788-87f74274a48e", 00:13:37.942 "is_configured": true, 00:13:37.942 "data_offset": 2048, 00:13:37.942 "data_size": 63488 00:13:37.942 }, 00:13:37.942 { 00:13:37.942 "name": "BaseBdev2", 00:13:37.942 "uuid": "23c8ceec-a5e1-5f38-9f95-a43fb93e49f4", 00:13:37.942 "is_configured": true, 00:13:37.942 "data_offset": 2048, 00:13:37.942 "data_size": 63488 00:13:37.942 }, 00:13:37.942 { 00:13:37.942 "name": "BaseBdev3", 00:13:37.942 "uuid": "6707ab2d-fe39-546e-ada4-230887b8c4cc", 00:13:37.942 "is_configured": true, 00:13:37.942 "data_offset": 2048, 00:13:37.942 "data_size": 63488 00:13:37.942 } 00:13:37.942 ] 00:13:37.942 }' 00:13:37.942 12:12:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.942 12:12:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.510 12:12:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:38.510 12:12:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.510 12:12:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.510 [2024-11-25 12:12:34.336336] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:38.510 [2024-11-25 12:12:34.336544] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:38.510 [2024-11-25 12:12:34.340046] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:38.510 [2024-11-25 12:12:34.340231] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.510 [2024-11-25 12:12:34.340364] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:38.510 [2024-11-25 12:12:34.340617] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:38.510 { 00:13:38.510 "results": [ 00:13:38.510 { 00:13:38.510 "job": "raid_bdev1", 00:13:38.510 "core_mask": "0x1", 00:13:38.510 "workload": "randrw", 00:13:38.510 "percentage": 50, 00:13:38.510 "status": "finished", 00:13:38.510 "queue_depth": 1, 00:13:38.510 "io_size": 131072, 00:13:38.510 "runtime": 1.404205, 00:13:38.510 "iops": 9853.974312867424, 00:13:38.510 "mibps": 1231.746789108428, 00:13:38.510 "io_failed": 1, 00:13:38.510 "io_timeout": 0, 00:13:38.510 "avg_latency_us": 141.78495368484673, 00:13:38.510 "min_latency_us": 30.487272727272728, 00:13:38.510 "max_latency_us": 2308.6545454545453 00:13:38.510 } 00:13:38.510 ], 00:13:38.510 "core_count": 1 00:13:38.510 } 00:13:38.510 12:12:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.510 12:12:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65503 00:13:38.510 12:12:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65503 ']' 00:13:38.510 12:12:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65503 00:13:38.510 12:12:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:13:38.510 12:12:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:38.510 12:12:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65503 00:13:38.510 killing process with pid 65503 00:13:38.510 12:12:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:38.510 12:12:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:38.510 12:12:34 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65503' 00:13:38.510 12:12:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65503 00:13:38.510 12:12:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65503 00:13:38.510 [2024-11-25 12:12:34.373787] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:38.510 [2024-11-25 12:12:34.580891] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:39.883 12:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ediis9SRJr 00:13:39.883 12:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:39.883 12:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:39.883 12:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:13:39.883 12:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:13:39.883 12:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:39.883 12:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:39.883 12:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:13:39.883 ************************************ 00:13:39.883 END TEST raid_write_error_test 00:13:39.883 ************************************ 00:13:39.883 00:13:39.883 real 0m4.753s 00:13:39.883 user 0m5.965s 00:13:39.883 sys 0m0.576s 00:13:39.883 12:12:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:39.883 12:12:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.883 12:12:35 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:39.883 12:12:35 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:13:39.883 12:12:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:39.883 12:12:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:39.883 12:12:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:39.883 ************************************ 00:13:39.883 START TEST raid_state_function_test 00:13:39.883 ************************************ 00:13:39.883 12:12:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:13:39.883 12:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:13:39.883 12:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:39.883 12:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:39.883 12:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:39.883 12:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:39.883 12:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:39.883 12:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:39.883 12:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:39.883 12:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:39.883 12:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:39.883 12:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:39.883 12:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:39.883 12:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:39.883 12:12:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:39.883 12:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:39.883 12:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:39.883 12:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:39.884 12:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:39.884 12:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:39.884 12:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:39.884 12:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:39.884 12:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:13:39.884 12:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:39.884 12:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:39.884 12:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:39.884 12:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:39.884 12:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65648 00:13:39.884 12:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:39.884 Process raid pid: 65648 00:13:39.884 12:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65648' 00:13:39.884 12:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65648 00:13:39.884 12:12:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65648 ']' 00:13:39.884 12:12:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.884 12:12:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:39.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.884 12:12:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.884 12:12:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:39.884 12:12:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.884 [2024-11-25 12:12:35.845374] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:13:39.884 [2024-11-25 12:12:35.845573] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:40.142 [2024-11-25 12:12:36.031621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.142 [2024-11-25 12:12:36.162976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.399 [2024-11-25 12:12:36.369712] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:40.399 [2024-11-25 12:12:36.369954] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:40.966 12:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:40.966 12:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:40.966 12:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:40.966 12:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.966 12:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.966 [2024-11-25 12:12:36.805844] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:40.966 [2024-11-25 12:12:36.805907] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:40.966 [2024-11-25 12:12:36.805925] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:40.966 [2024-11-25 12:12:36.805941] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:40.966 [2024-11-25 12:12:36.805951] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:40.966 [2024-11-25 12:12:36.805965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:40.966 12:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.966 12:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:40.966 12:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:40.966 12:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:40.966 12:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:40.966 12:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:40.966 12:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:40.966 12:12:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.966 12:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.966 12:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.966 12:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.966 12:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.966 12:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.966 12:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.966 12:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.966 12:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.967 12:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.967 "name": "Existed_Raid", 00:13:40.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.967 "strip_size_kb": 64, 00:13:40.967 "state": "configuring", 00:13:40.967 "raid_level": "concat", 00:13:40.967 "superblock": false, 00:13:40.967 "num_base_bdevs": 3, 00:13:40.967 "num_base_bdevs_discovered": 0, 00:13:40.967 "num_base_bdevs_operational": 3, 00:13:40.967 "base_bdevs_list": [ 00:13:40.967 { 00:13:40.967 "name": "BaseBdev1", 00:13:40.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.967 "is_configured": false, 00:13:40.967 "data_offset": 0, 00:13:40.967 "data_size": 0 00:13:40.967 }, 00:13:40.967 { 00:13:40.967 "name": "BaseBdev2", 00:13:40.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.967 "is_configured": false, 00:13:40.967 "data_offset": 0, 00:13:40.967 "data_size": 0 00:13:40.967 }, 00:13:40.967 { 00:13:40.967 "name": "BaseBdev3", 00:13:40.967 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:40.967 "is_configured": false, 00:13:40.967 "data_offset": 0, 00:13:40.967 "data_size": 0 00:13:40.967 } 00:13:40.967 ] 00:13:40.967 }' 00:13:40.967 12:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.967 12:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.226 12:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:41.226 12:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.226 12:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.494 [2024-11-25 12:12:37.317941] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:41.494 [2024-11-25 12:12:37.317986] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:41.494 12:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.494 12:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:41.494 12:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.494 12:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.494 [2024-11-25 12:12:37.325921] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:41.494 [2024-11-25 12:12:37.326109] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:41.494 [2024-11-25 12:12:37.326235] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:41.494 [2024-11-25 12:12:37.326414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:13:41.494 [2024-11-25 12:12:37.326531] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:41.494 [2024-11-25 12:12:37.326659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:41.494 12:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.494 12:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:41.494 12:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.494 12:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.494 [2024-11-25 12:12:37.371147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:41.494 BaseBdev1 00:13:41.494 12:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.494 12:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:41.494 12:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:41.494 12:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:41.494 12:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:41.494 12:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:41.494 12:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:41.494 12:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:41.494 12:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.494 12:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:13:41.494 12:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.494 12:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:41.494 12:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.494 12:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.494 [ 00:13:41.494 { 00:13:41.494 "name": "BaseBdev1", 00:13:41.494 "aliases": [ 00:13:41.494 "8fd7a3df-6c10-45bc-ad65-51ad5d4f01d6" 00:13:41.494 ], 00:13:41.494 "product_name": "Malloc disk", 00:13:41.494 "block_size": 512, 00:13:41.494 "num_blocks": 65536, 00:13:41.494 "uuid": "8fd7a3df-6c10-45bc-ad65-51ad5d4f01d6", 00:13:41.494 "assigned_rate_limits": { 00:13:41.494 "rw_ios_per_sec": 0, 00:13:41.494 "rw_mbytes_per_sec": 0, 00:13:41.494 "r_mbytes_per_sec": 0, 00:13:41.494 "w_mbytes_per_sec": 0 00:13:41.494 }, 00:13:41.494 "claimed": true, 00:13:41.494 "claim_type": "exclusive_write", 00:13:41.494 "zoned": false, 00:13:41.494 "supported_io_types": { 00:13:41.494 "read": true, 00:13:41.494 "write": true, 00:13:41.494 "unmap": true, 00:13:41.494 "flush": true, 00:13:41.494 "reset": true, 00:13:41.494 "nvme_admin": false, 00:13:41.494 "nvme_io": false, 00:13:41.494 "nvme_io_md": false, 00:13:41.494 "write_zeroes": true, 00:13:41.494 "zcopy": true, 00:13:41.494 "get_zone_info": false, 00:13:41.494 "zone_management": false, 00:13:41.494 "zone_append": false, 00:13:41.494 "compare": false, 00:13:41.494 "compare_and_write": false, 00:13:41.494 "abort": true, 00:13:41.494 "seek_hole": false, 00:13:41.494 "seek_data": false, 00:13:41.494 "copy": true, 00:13:41.494 "nvme_iov_md": false 00:13:41.494 }, 00:13:41.494 "memory_domains": [ 00:13:41.494 { 00:13:41.494 "dma_device_id": "system", 00:13:41.494 "dma_device_type": 1 00:13:41.494 }, 00:13:41.494 { 00:13:41.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:13:41.494 "dma_device_type": 2 00:13:41.494 } 00:13:41.494 ], 00:13:41.494 "driver_specific": {} 00:13:41.494 } 00:13:41.494 ] 00:13:41.494 12:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.494 12:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:41.494 12:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:41.494 12:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:41.494 12:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:41.494 12:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:41.494 12:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:41.495 12:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:41.495 12:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.495 12:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.495 12:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.495 12:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.495 12:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.495 12:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.495 12:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.495 12:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.495 12:12:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.495 12:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.495 "name": "Existed_Raid", 00:13:41.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.495 "strip_size_kb": 64, 00:13:41.495 "state": "configuring", 00:13:41.495 "raid_level": "concat", 00:13:41.495 "superblock": false, 00:13:41.495 "num_base_bdevs": 3, 00:13:41.495 "num_base_bdevs_discovered": 1, 00:13:41.495 "num_base_bdevs_operational": 3, 00:13:41.495 "base_bdevs_list": [ 00:13:41.495 { 00:13:41.495 "name": "BaseBdev1", 00:13:41.495 "uuid": "8fd7a3df-6c10-45bc-ad65-51ad5d4f01d6", 00:13:41.495 "is_configured": true, 00:13:41.495 "data_offset": 0, 00:13:41.495 "data_size": 65536 00:13:41.495 }, 00:13:41.495 { 00:13:41.495 "name": "BaseBdev2", 00:13:41.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.495 "is_configured": false, 00:13:41.495 "data_offset": 0, 00:13:41.495 "data_size": 0 00:13:41.495 }, 00:13:41.495 { 00:13:41.495 "name": "BaseBdev3", 00:13:41.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.495 "is_configured": false, 00:13:41.495 "data_offset": 0, 00:13:41.495 "data_size": 0 00:13:41.495 } 00:13:41.495 ] 00:13:41.495 }' 00:13:41.495 12:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.495 12:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.063 12:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:42.063 12:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.063 12:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.063 [2024-11-25 12:12:37.931376] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:42.063 [2024-11-25 12:12:37.931444] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:42.063 12:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.063 12:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:42.063 12:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.063 12:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.063 [2024-11-25 12:12:37.939416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:42.063 [2024-11-25 12:12:37.941804] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:42.063 [2024-11-25 12:12:37.941862] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:42.063 [2024-11-25 12:12:37.941880] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:42.063 [2024-11-25 12:12:37.941896] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:42.063 12:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.063 12:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:42.063 12:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:42.063 12:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:42.063 12:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:42.063 12:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:42.063 12:12:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:42.063 12:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:42.063 12:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:42.063 12:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.063 12:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.063 12:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.063 12:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.063 12:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.063 12:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.063 12:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.063 12:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.063 12:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.063 12:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.063 "name": "Existed_Raid", 00:13:42.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.063 "strip_size_kb": 64, 00:13:42.063 "state": "configuring", 00:13:42.063 "raid_level": "concat", 00:13:42.063 "superblock": false, 00:13:42.063 "num_base_bdevs": 3, 00:13:42.063 "num_base_bdevs_discovered": 1, 00:13:42.063 "num_base_bdevs_operational": 3, 00:13:42.063 "base_bdevs_list": [ 00:13:42.063 { 00:13:42.063 "name": "BaseBdev1", 00:13:42.063 "uuid": "8fd7a3df-6c10-45bc-ad65-51ad5d4f01d6", 00:13:42.063 "is_configured": true, 00:13:42.063 "data_offset": 
0, 00:13:42.063 "data_size": 65536 00:13:42.063 }, 00:13:42.063 { 00:13:42.063 "name": "BaseBdev2", 00:13:42.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.063 "is_configured": false, 00:13:42.063 "data_offset": 0, 00:13:42.063 "data_size": 0 00:13:42.063 }, 00:13:42.063 { 00:13:42.063 "name": "BaseBdev3", 00:13:42.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.063 "is_configured": false, 00:13:42.063 "data_offset": 0, 00:13:42.063 "data_size": 0 00:13:42.063 } 00:13:42.063 ] 00:13:42.063 }' 00:13:42.063 12:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.063 12:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.631 12:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:42.631 12:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.631 12:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.631 [2024-11-25 12:12:38.481832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:42.631 BaseBdev2 00:13:42.631 12:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.631 12:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:42.631 12:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:42.631 12:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:42.631 12:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:42.631 12:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:42.631 12:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
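The `waitforbdev` calls in the trace (autotest_common.sh@903-911) poll `rpc_cmd bdev_get_bdevs -b NAME -t 2000` until the freshly created malloc bdev shows up. A minimal standalone sketch of that retry pattern follows; `check_bdev` and the `/tmp/fake_bdev_marker` file are hypothetical stand-ins for the real RPC probe, not part of the suite:

```shell
#!/bin/sh
# Poll a probe command until it succeeds or a timeout expires,
# mirroring the waitforbdev loop in autotest_common.sh.
waitfor() {
    probe=$1
    timeout_s=${2:-5}
    i=0
    while [ "$i" -lt "$timeout_s" ]; do
        if $probe; then
            return 0        # resource showed up
        fi
        sleep 1
        i=$((i + 1))
    done
    return 1                # gave up after timeout_s seconds
}

# Hypothetical probe: in the real suite this is
# "rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000".
check_bdev() { [ -e /tmp/fake_bdev_marker ]; }

touch /tmp/fake_bdev_marker
waitfor check_bdev 5 && echo "bdev ready"
```

The suite additionally runs `rpc_cmd bdev_wait_for_examine` first, so the bdev layer has finished claiming devices before the poll starts.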
00:13:42.631 12:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:42.631 12:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.631 12:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.631 12:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.631 12:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:42.631 12:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.631 12:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.631 [ 00:13:42.631 { 00:13:42.632 "name": "BaseBdev2", 00:13:42.632 "aliases": [ 00:13:42.632 "68f9e6a0-2e3b-4f6a-99e7-6ac858c08550" 00:13:42.632 ], 00:13:42.632 "product_name": "Malloc disk", 00:13:42.632 "block_size": 512, 00:13:42.632 "num_blocks": 65536, 00:13:42.632 "uuid": "68f9e6a0-2e3b-4f6a-99e7-6ac858c08550", 00:13:42.632 "assigned_rate_limits": { 00:13:42.632 "rw_ios_per_sec": 0, 00:13:42.632 "rw_mbytes_per_sec": 0, 00:13:42.632 "r_mbytes_per_sec": 0, 00:13:42.632 "w_mbytes_per_sec": 0 00:13:42.632 }, 00:13:42.632 "claimed": true, 00:13:42.632 "claim_type": "exclusive_write", 00:13:42.632 "zoned": false, 00:13:42.632 "supported_io_types": { 00:13:42.632 "read": true, 00:13:42.632 "write": true, 00:13:42.632 "unmap": true, 00:13:42.632 "flush": true, 00:13:42.632 "reset": true, 00:13:42.632 "nvme_admin": false, 00:13:42.632 "nvme_io": false, 00:13:42.632 "nvme_io_md": false, 00:13:42.632 "write_zeroes": true, 00:13:42.632 "zcopy": true, 00:13:42.632 "get_zone_info": false, 00:13:42.632 "zone_management": false, 00:13:42.632 "zone_append": false, 00:13:42.632 "compare": false, 00:13:42.632 "compare_and_write": false, 00:13:42.632 "abort": true, 00:13:42.632 "seek_hole": 
false, 00:13:42.632 "seek_data": false, 00:13:42.632 "copy": true, 00:13:42.632 "nvme_iov_md": false 00:13:42.632 }, 00:13:42.632 "memory_domains": [ 00:13:42.632 { 00:13:42.632 "dma_device_id": "system", 00:13:42.632 "dma_device_type": 1 00:13:42.632 }, 00:13:42.632 { 00:13:42.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.632 "dma_device_type": 2 00:13:42.632 } 00:13:42.632 ], 00:13:42.632 "driver_specific": {} 00:13:42.632 } 00:13:42.632 ] 00:13:42.632 12:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.632 12:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:42.632 12:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:42.632 12:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:42.632 12:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:42.632 12:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:42.632 12:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:42.632 12:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:42.632 12:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:42.632 12:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:42.632 12:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.632 12:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.632 12:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.632 12:12:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.632 12:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.632 12:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.632 12:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.632 12:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.632 12:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.632 12:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.632 "name": "Existed_Raid", 00:13:42.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.632 "strip_size_kb": 64, 00:13:42.632 "state": "configuring", 00:13:42.632 "raid_level": "concat", 00:13:42.632 "superblock": false, 00:13:42.632 "num_base_bdevs": 3, 00:13:42.632 "num_base_bdevs_discovered": 2, 00:13:42.632 "num_base_bdevs_operational": 3, 00:13:42.632 "base_bdevs_list": [ 00:13:42.632 { 00:13:42.632 "name": "BaseBdev1", 00:13:42.632 "uuid": "8fd7a3df-6c10-45bc-ad65-51ad5d4f01d6", 00:13:42.632 "is_configured": true, 00:13:42.632 "data_offset": 0, 00:13:42.632 "data_size": 65536 00:13:42.632 }, 00:13:42.632 { 00:13:42.632 "name": "BaseBdev2", 00:13:42.632 "uuid": "68f9e6a0-2e3b-4f6a-99e7-6ac858c08550", 00:13:42.632 "is_configured": true, 00:13:42.632 "data_offset": 0, 00:13:42.632 "data_size": 65536 00:13:42.632 }, 00:13:42.632 { 00:13:42.632 "name": "BaseBdev3", 00:13:42.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.632 "is_configured": false, 00:13:42.632 "data_offset": 0, 00:13:42.632 "data_size": 0 00:13:42.632 } 00:13:42.632 ] 00:13:42.632 }' 00:13:42.632 12:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.632 12:12:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:43.200 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:43.200 12:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.200 12:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.200 [2024-11-25 12:12:39.085335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:43.200 [2024-11-25 12:12:39.085602] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:43.200 [2024-11-25 12:12:39.085669] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:43.200 [2024-11-25 12:12:39.086199] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:43.200 [2024-11-25 12:12:39.086584] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:43.200 [2024-11-25 12:12:39.086725] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:43.200 [2024-11-25 12:12:39.087224] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.200 BaseBdev3 00:13:43.200 12:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.200 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:43.200 12:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:43.200 12:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:43.200 12:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:43.200 12:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:43.200 12:12:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:43.200 12:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:43.200 12:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.200 12:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.200 12:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.200 12:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:43.200 12:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.200 12:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.200 [ 00:13:43.200 { 00:13:43.200 "name": "BaseBdev3", 00:13:43.200 "aliases": [ 00:13:43.200 "f482b730-dc46-4927-9426-5c8f05b9d05e" 00:13:43.200 ], 00:13:43.200 "product_name": "Malloc disk", 00:13:43.200 "block_size": 512, 00:13:43.200 "num_blocks": 65536, 00:13:43.200 "uuid": "f482b730-dc46-4927-9426-5c8f05b9d05e", 00:13:43.200 "assigned_rate_limits": { 00:13:43.200 "rw_ios_per_sec": 0, 00:13:43.200 "rw_mbytes_per_sec": 0, 00:13:43.200 "r_mbytes_per_sec": 0, 00:13:43.200 "w_mbytes_per_sec": 0 00:13:43.200 }, 00:13:43.200 "claimed": true, 00:13:43.200 "claim_type": "exclusive_write", 00:13:43.200 "zoned": false, 00:13:43.200 "supported_io_types": { 00:13:43.201 "read": true, 00:13:43.201 "write": true, 00:13:43.201 "unmap": true, 00:13:43.201 "flush": true, 00:13:43.201 "reset": true, 00:13:43.201 "nvme_admin": false, 00:13:43.201 "nvme_io": false, 00:13:43.201 "nvme_io_md": false, 00:13:43.201 "write_zeroes": true, 00:13:43.201 "zcopy": true, 00:13:43.201 "get_zone_info": false, 00:13:43.201 "zone_management": false, 00:13:43.201 "zone_append": false, 00:13:43.201 "compare": false, 
00:13:43.201 "compare_and_write": false, 00:13:43.201 "abort": true, 00:13:43.201 "seek_hole": false, 00:13:43.201 "seek_data": false, 00:13:43.201 "copy": true, 00:13:43.201 "nvme_iov_md": false 00:13:43.201 }, 00:13:43.201 "memory_domains": [ 00:13:43.201 { 00:13:43.201 "dma_device_id": "system", 00:13:43.201 "dma_device_type": 1 00:13:43.201 }, 00:13:43.201 { 00:13:43.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.201 "dma_device_type": 2 00:13:43.201 } 00:13:43.201 ], 00:13:43.201 "driver_specific": {} 00:13:43.201 } 00:13:43.201 ] 00:13:43.201 12:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.201 12:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:43.201 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:43.201 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:43.201 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:13:43.201 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:43.201 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.201 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:43.201 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:43.201 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:43.201 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.201 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.201 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:43.201 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.201 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:43.201 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.201 12:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.201 12:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.201 12:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.201 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.201 "name": "Existed_Raid", 00:13:43.201 "uuid": "990780c0-8f15-4849-b7b5-c79739a9170e", 00:13:43.201 "strip_size_kb": 64, 00:13:43.201 "state": "online", 00:13:43.201 "raid_level": "concat", 00:13:43.201 "superblock": false, 00:13:43.201 "num_base_bdevs": 3, 00:13:43.201 "num_base_bdevs_discovered": 3, 00:13:43.201 "num_base_bdevs_operational": 3, 00:13:43.201 "base_bdevs_list": [ 00:13:43.201 { 00:13:43.201 "name": "BaseBdev1", 00:13:43.201 "uuid": "8fd7a3df-6c10-45bc-ad65-51ad5d4f01d6", 00:13:43.201 "is_configured": true, 00:13:43.201 "data_offset": 0, 00:13:43.201 "data_size": 65536 00:13:43.201 }, 00:13:43.201 { 00:13:43.201 "name": "BaseBdev2", 00:13:43.201 "uuid": "68f9e6a0-2e3b-4f6a-99e7-6ac858c08550", 00:13:43.201 "is_configured": true, 00:13:43.201 "data_offset": 0, 00:13:43.201 "data_size": 65536 00:13:43.201 }, 00:13:43.201 { 00:13:43.201 "name": "BaseBdev3", 00:13:43.201 "uuid": "f482b730-dc46-4927-9426-5c8f05b9d05e", 00:13:43.201 "is_configured": true, 00:13:43.201 "data_offset": 0, 00:13:43.201 "data_size": 65536 00:13:43.201 } 00:13:43.201 ] 00:13:43.201 }' 00:13:43.201 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
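`verify_raid_bdev_state` (bdev_raid.sh@103-115) isolates one entry from `bdev_raid_get_bdevs all` with jq and checks its state and base-bdev counts. A minimal sketch over a canned response, assuming `jq` is available; the JSON here is a trimmed sample in the shape seen in the trace, not live RPC output:

```shell
#!/bin/sh
# Canned output in the shape returned by "rpc_cmd bdev_raid_get_bdevs all".
bdevs='[{"name":"Existed_Raid","state":"online","num_base_bdevs":3,
        "num_base_bdevs_discovered":3,"num_base_bdevs_operational":3}]'

# Same filter the test script uses to isolate the raid bdev under test.
info=$(printf '%s' "$bdevs" | jq -r '.[] | select(.name == "Existed_Raid")')

state=$(printf '%s' "$info" | jq -r '.state')
discovered=$(printf '%s' "$info" | jq -r '.num_base_bdevs_discovered')
operational=$(printf '%s' "$info" | jq -r '.num_base_bdevs_operational')

# Once all three base bdevs are claimed, the raid moves from
# "configuring" to "online" and discovered equals operational.
[ "$state" = online ] && [ "$discovered" -eq "$operational" ] \
    && echo "Existed_Raid verified: $state ($discovered/$operational)"
```

This is the transition the log documents: `num_base_bdevs_discovered` climbs 0 → 1 → 2 → 3 as each `bdev_malloc_create` lands, and `state` flips to `online` only after BaseBdev3 is claimed.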
00:13:43.201 12:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.768 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:43.768 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:43.768 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:43.768 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:43.768 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:43.768 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:43.768 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:43.768 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:43.768 12:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.768 12:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.768 [2024-11-25 12:12:39.653910] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:43.768 12:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.769 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:43.769 "name": "Existed_Raid", 00:13:43.769 "aliases": [ 00:13:43.769 "990780c0-8f15-4849-b7b5-c79739a9170e" 00:13:43.769 ], 00:13:43.769 "product_name": "Raid Volume", 00:13:43.769 "block_size": 512, 00:13:43.769 "num_blocks": 196608, 00:13:43.769 "uuid": "990780c0-8f15-4849-b7b5-c79739a9170e", 00:13:43.769 "assigned_rate_limits": { 00:13:43.769 "rw_ios_per_sec": 0, 00:13:43.769 "rw_mbytes_per_sec": 0, 00:13:43.769 "r_mbytes_per_sec": 
0, 00:13:43.769 "w_mbytes_per_sec": 0 00:13:43.769 }, 00:13:43.769 "claimed": false, 00:13:43.769 "zoned": false, 00:13:43.769 "supported_io_types": { 00:13:43.769 "read": true, 00:13:43.769 "write": true, 00:13:43.769 "unmap": true, 00:13:43.769 "flush": true, 00:13:43.769 "reset": true, 00:13:43.769 "nvme_admin": false, 00:13:43.769 "nvme_io": false, 00:13:43.769 "nvme_io_md": false, 00:13:43.769 "write_zeroes": true, 00:13:43.769 "zcopy": false, 00:13:43.769 "get_zone_info": false, 00:13:43.769 "zone_management": false, 00:13:43.769 "zone_append": false, 00:13:43.769 "compare": false, 00:13:43.769 "compare_and_write": false, 00:13:43.769 "abort": false, 00:13:43.769 "seek_hole": false, 00:13:43.769 "seek_data": false, 00:13:43.769 "copy": false, 00:13:43.769 "nvme_iov_md": false 00:13:43.769 }, 00:13:43.769 "memory_domains": [ 00:13:43.769 { 00:13:43.769 "dma_device_id": "system", 00:13:43.769 "dma_device_type": 1 00:13:43.769 }, 00:13:43.769 { 00:13:43.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.769 "dma_device_type": 2 00:13:43.769 }, 00:13:43.769 { 00:13:43.769 "dma_device_id": "system", 00:13:43.769 "dma_device_type": 1 00:13:43.769 }, 00:13:43.769 { 00:13:43.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.769 "dma_device_type": 2 00:13:43.769 }, 00:13:43.769 { 00:13:43.769 "dma_device_id": "system", 00:13:43.769 "dma_device_type": 1 00:13:43.769 }, 00:13:43.769 { 00:13:43.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.769 "dma_device_type": 2 00:13:43.769 } 00:13:43.769 ], 00:13:43.769 "driver_specific": { 00:13:43.769 "raid": { 00:13:43.769 "uuid": "990780c0-8f15-4849-b7b5-c79739a9170e", 00:13:43.769 "strip_size_kb": 64, 00:13:43.769 "state": "online", 00:13:43.769 "raid_level": "concat", 00:13:43.769 "superblock": false, 00:13:43.769 "num_base_bdevs": 3, 00:13:43.769 "num_base_bdevs_discovered": 3, 00:13:43.769 "num_base_bdevs_operational": 3, 00:13:43.769 "base_bdevs_list": [ 00:13:43.769 { 00:13:43.769 "name": "BaseBdev1", 
00:13:43.769 "uuid": "8fd7a3df-6c10-45bc-ad65-51ad5d4f01d6", 00:13:43.769 "is_configured": true, 00:13:43.769 "data_offset": 0, 00:13:43.769 "data_size": 65536 00:13:43.769 }, 00:13:43.769 { 00:13:43.769 "name": "BaseBdev2", 00:13:43.769 "uuid": "68f9e6a0-2e3b-4f6a-99e7-6ac858c08550", 00:13:43.769 "is_configured": true, 00:13:43.769 "data_offset": 0, 00:13:43.769 "data_size": 65536 00:13:43.769 }, 00:13:43.769 { 00:13:43.769 "name": "BaseBdev3", 00:13:43.769 "uuid": "f482b730-dc46-4927-9426-5c8f05b9d05e", 00:13:43.769 "is_configured": true, 00:13:43.769 "data_offset": 0, 00:13:43.769 "data_size": 65536 00:13:43.769 } 00:13:43.769 ] 00:13:43.769 } 00:13:43.769 } 00:13:43.769 }' 00:13:43.769 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:43.769 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:43.769 BaseBdev2 00:13:43.769 BaseBdev3' 00:13:43.769 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.769 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:43.769 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:43.769 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:43.769 12:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.769 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.769 12:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.769 12:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:43.769 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:43.769 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:43.769 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:43.769 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.769 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:43.769 12:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.769 12:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.028 12:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.028 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:44.028 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:44.028 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:44.028 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:44.028 12:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.028 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:44.028 12:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.028 12:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.028 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:13:44.028 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:44.028 12:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:44.028 12:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.028 12:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.028 [2024-11-25 12:12:39.957686] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:44.028 [2024-11-25 12:12:39.957722] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:44.028 [2024-11-25 12:12:39.957795] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:44.028 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.028 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:44.028 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:13:44.028 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:44.028 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:44.028 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:44.028 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:13:44.028 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:44.028 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:44.028 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:44.028 12:12:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:44.028 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:44.028 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.028 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.028 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.028 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.028 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.028 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.028 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.028 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.028 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.028 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.028 "name": "Existed_Raid", 00:13:44.028 "uuid": "990780c0-8f15-4849-b7b5-c79739a9170e", 00:13:44.028 "strip_size_kb": 64, 00:13:44.028 "state": "offline", 00:13:44.028 "raid_level": "concat", 00:13:44.028 "superblock": false, 00:13:44.028 "num_base_bdevs": 3, 00:13:44.028 "num_base_bdevs_discovered": 2, 00:13:44.028 "num_base_bdevs_operational": 2, 00:13:44.028 "base_bdevs_list": [ 00:13:44.028 { 00:13:44.028 "name": null, 00:13:44.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.028 "is_configured": false, 00:13:44.028 "data_offset": 0, 00:13:44.028 "data_size": 65536 00:13:44.028 }, 00:13:44.028 { 00:13:44.028 "name": "BaseBdev2", 00:13:44.028 "uuid": 
"68f9e6a0-2e3b-4f6a-99e7-6ac858c08550", 00:13:44.028 "is_configured": true, 00:13:44.028 "data_offset": 0, 00:13:44.028 "data_size": 65536 00:13:44.028 }, 00:13:44.028 { 00:13:44.028 "name": "BaseBdev3", 00:13:44.028 "uuid": "f482b730-dc46-4927-9426-5c8f05b9d05e", 00:13:44.028 "is_configured": true, 00:13:44.028 "data_offset": 0, 00:13:44.028 "data_size": 65536 00:13:44.028 } 00:13:44.028 ] 00:13:44.028 }' 00:13:44.028 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.028 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.595 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:44.595 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:44.595 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.595 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:44.595 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.595 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.595 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.595 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:44.595 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:44.595 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:44.595 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.595 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.595 [2024-11-25 12:12:40.597561] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:44.854 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.854 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:44.854 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:44.854 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.854 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:44.854 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.854 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.854 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.854 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:44.854 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:44.854 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:44.854 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.854 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.854 [2024-11-25 12:12:40.736798] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:44.854 [2024-11-25 12:12:40.736860] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:44.854 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.854 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:44.854 12:12:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:44.854 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:44.854 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.854 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.854 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.854 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.854 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:44.854 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:44.854 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:44.854 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:44.854 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:44.854 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:44.854 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.854 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.854 BaseBdev2 00:13:44.854 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.854 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:44.854 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:44.854 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:44.854 
12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:44.854 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:44.854 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:44.854 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:44.854 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.854 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.854 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.854 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:44.854 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.854 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.113 [ 00:13:45.113 { 00:13:45.113 "name": "BaseBdev2", 00:13:45.113 "aliases": [ 00:13:45.113 "da2f52b3-6f73-40d7-b118-ab1b8459b70f" 00:13:45.113 ], 00:13:45.113 "product_name": "Malloc disk", 00:13:45.113 "block_size": 512, 00:13:45.113 "num_blocks": 65536, 00:13:45.113 "uuid": "da2f52b3-6f73-40d7-b118-ab1b8459b70f", 00:13:45.113 "assigned_rate_limits": { 00:13:45.113 "rw_ios_per_sec": 0, 00:13:45.113 "rw_mbytes_per_sec": 0, 00:13:45.113 "r_mbytes_per_sec": 0, 00:13:45.113 "w_mbytes_per_sec": 0 00:13:45.113 }, 00:13:45.113 "claimed": false, 00:13:45.113 "zoned": false, 00:13:45.113 "supported_io_types": { 00:13:45.113 "read": true, 00:13:45.113 "write": true, 00:13:45.113 "unmap": true, 00:13:45.113 "flush": true, 00:13:45.113 "reset": true, 00:13:45.113 "nvme_admin": false, 00:13:45.113 "nvme_io": false, 00:13:45.113 "nvme_io_md": false, 00:13:45.113 "write_zeroes": true, 
00:13:45.113 "zcopy": true, 00:13:45.113 "get_zone_info": false, 00:13:45.113 "zone_management": false, 00:13:45.113 "zone_append": false, 00:13:45.113 "compare": false, 00:13:45.113 "compare_and_write": false, 00:13:45.113 "abort": true, 00:13:45.113 "seek_hole": false, 00:13:45.113 "seek_data": false, 00:13:45.113 "copy": true, 00:13:45.113 "nvme_iov_md": false 00:13:45.113 }, 00:13:45.113 "memory_domains": [ 00:13:45.113 { 00:13:45.113 "dma_device_id": "system", 00:13:45.113 "dma_device_type": 1 00:13:45.113 }, 00:13:45.113 { 00:13:45.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.113 "dma_device_type": 2 00:13:45.113 } 00:13:45.113 ], 00:13:45.113 "driver_specific": {} 00:13:45.113 } 00:13:45.113 ] 00:13:45.113 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.113 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:45.113 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:45.113 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:45.113 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:45.113 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.113 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.113 BaseBdev3 00:13:45.113 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.113 12:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:45.113 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:45.113 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:45.113 12:12:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:45.113 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:45.113 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:45.113 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:45.113 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.113 12:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.113 12:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.113 12:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:45.113 12:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.113 12:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.113 [ 00:13:45.113 { 00:13:45.113 "name": "BaseBdev3", 00:13:45.113 "aliases": [ 00:13:45.113 "fb7afc8f-b8dc-4582-aa62-4d51391b9e00" 00:13:45.113 ], 00:13:45.113 "product_name": "Malloc disk", 00:13:45.113 "block_size": 512, 00:13:45.113 "num_blocks": 65536, 00:13:45.113 "uuid": "fb7afc8f-b8dc-4582-aa62-4d51391b9e00", 00:13:45.113 "assigned_rate_limits": { 00:13:45.113 "rw_ios_per_sec": 0, 00:13:45.113 "rw_mbytes_per_sec": 0, 00:13:45.113 "r_mbytes_per_sec": 0, 00:13:45.113 "w_mbytes_per_sec": 0 00:13:45.113 }, 00:13:45.113 "claimed": false, 00:13:45.113 "zoned": false, 00:13:45.113 "supported_io_types": { 00:13:45.113 "read": true, 00:13:45.113 "write": true, 00:13:45.113 "unmap": true, 00:13:45.113 "flush": true, 00:13:45.113 "reset": true, 00:13:45.113 "nvme_admin": false, 00:13:45.113 "nvme_io": false, 00:13:45.113 "nvme_io_md": false, 00:13:45.113 "write_zeroes": true, 
00:13:45.113 "zcopy": true, 00:13:45.113 "get_zone_info": false, 00:13:45.113 "zone_management": false, 00:13:45.113 "zone_append": false, 00:13:45.113 "compare": false, 00:13:45.113 "compare_and_write": false, 00:13:45.113 "abort": true, 00:13:45.113 "seek_hole": false, 00:13:45.113 "seek_data": false, 00:13:45.113 "copy": true, 00:13:45.113 "nvme_iov_md": false 00:13:45.113 }, 00:13:45.113 "memory_domains": [ 00:13:45.113 { 00:13:45.113 "dma_device_id": "system", 00:13:45.113 "dma_device_type": 1 00:13:45.113 }, 00:13:45.113 { 00:13:45.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.113 "dma_device_type": 2 00:13:45.113 } 00:13:45.113 ], 00:13:45.113 "driver_specific": {} 00:13:45.113 } 00:13:45.113 ] 00:13:45.113 12:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.113 12:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:45.113 12:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:45.113 12:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:45.113 12:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:45.113 12:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.113 12:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.113 [2024-11-25 12:12:41.036435] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:45.113 [2024-11-25 12:12:41.036630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:45.113 [2024-11-25 12:12:41.036685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:45.113 [2024-11-25 12:12:41.039184] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:45.113 12:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.113 12:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:45.113 12:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:45.113 12:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:45.113 12:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:45.113 12:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:45.113 12:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:45.113 12:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.113 12:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.113 12:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.113 12:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.113 12:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.113 12:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.113 12:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.113 12:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.113 12:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.113 12:12:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.113 "name": "Existed_Raid", 00:13:45.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.113 "strip_size_kb": 64, 00:13:45.113 "state": "configuring", 00:13:45.113 "raid_level": "concat", 00:13:45.113 "superblock": false, 00:13:45.113 "num_base_bdevs": 3, 00:13:45.113 "num_base_bdevs_discovered": 2, 00:13:45.114 "num_base_bdevs_operational": 3, 00:13:45.114 "base_bdevs_list": [ 00:13:45.114 { 00:13:45.114 "name": "BaseBdev1", 00:13:45.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.114 "is_configured": false, 00:13:45.114 "data_offset": 0, 00:13:45.114 "data_size": 0 00:13:45.114 }, 00:13:45.114 { 00:13:45.114 "name": "BaseBdev2", 00:13:45.114 "uuid": "da2f52b3-6f73-40d7-b118-ab1b8459b70f", 00:13:45.114 "is_configured": true, 00:13:45.114 "data_offset": 0, 00:13:45.114 "data_size": 65536 00:13:45.114 }, 00:13:45.114 { 00:13:45.114 "name": "BaseBdev3", 00:13:45.114 "uuid": "fb7afc8f-b8dc-4582-aa62-4d51391b9e00", 00:13:45.114 "is_configured": true, 00:13:45.114 "data_offset": 0, 00:13:45.114 "data_size": 65536 00:13:45.114 } 00:13:45.114 ] 00:13:45.114 }' 00:13:45.114 12:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.114 12:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.682 12:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:45.682 12:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.682 12:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.682 [2024-11-25 12:12:41.544540] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:45.682 12:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.682 12:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:45.682 12:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:45.682 12:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:45.682 12:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:45.682 12:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:45.682 12:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:45.682 12:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.682 12:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.682 12:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.682 12:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.682 12:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.682 12:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.682 12:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.682 12:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.682 12:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.682 12:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.682 "name": "Existed_Raid", 00:13:45.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.682 "strip_size_kb": 64, 00:13:45.682 "state": "configuring", 00:13:45.682 "raid_level": "concat", 00:13:45.682 "superblock": false, 
00:13:45.682 "num_base_bdevs": 3, 00:13:45.682 "num_base_bdevs_discovered": 1, 00:13:45.682 "num_base_bdevs_operational": 3, 00:13:45.682 "base_bdevs_list": [ 00:13:45.682 { 00:13:45.682 "name": "BaseBdev1", 00:13:45.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.682 "is_configured": false, 00:13:45.682 "data_offset": 0, 00:13:45.682 "data_size": 0 00:13:45.682 }, 00:13:45.682 { 00:13:45.682 "name": null, 00:13:45.682 "uuid": "da2f52b3-6f73-40d7-b118-ab1b8459b70f", 00:13:45.682 "is_configured": false, 00:13:45.682 "data_offset": 0, 00:13:45.682 "data_size": 65536 00:13:45.682 }, 00:13:45.682 { 00:13:45.682 "name": "BaseBdev3", 00:13:45.682 "uuid": "fb7afc8f-b8dc-4582-aa62-4d51391b9e00", 00:13:45.682 "is_configured": true, 00:13:45.682 "data_offset": 0, 00:13:45.682 "data_size": 65536 00:13:45.682 } 00:13:45.682 ] 00:13:45.682 }' 00:13:45.682 12:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.682 12:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.249 12:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.249 12:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.249 12:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.249 12:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:46.249 12:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.249 12:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:46.249 12:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:46.249 12:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.249 
12:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.249 [2024-11-25 12:12:42.162712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:46.249 BaseBdev1 00:13:46.249 12:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.249 12:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:46.249 12:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:46.249 12:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:46.249 12:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:46.249 12:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:46.249 12:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:46.249 12:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:46.249 12:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.249 12:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.249 12:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.249 12:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:46.249 12:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.249 12:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.249 [ 00:13:46.249 { 00:13:46.249 "name": "BaseBdev1", 00:13:46.249 "aliases": [ 00:13:46.249 "c3fe1f30-c39c-4a71-b9c3-a6e2d86d5adc" 00:13:46.249 ], 00:13:46.249 "product_name": 
"Malloc disk", 00:13:46.249 "block_size": 512, 00:13:46.249 "num_blocks": 65536, 00:13:46.249 "uuid": "c3fe1f30-c39c-4a71-b9c3-a6e2d86d5adc", 00:13:46.249 "assigned_rate_limits": { 00:13:46.249 "rw_ios_per_sec": 0, 00:13:46.249 "rw_mbytes_per_sec": 0, 00:13:46.249 "r_mbytes_per_sec": 0, 00:13:46.249 "w_mbytes_per_sec": 0 00:13:46.249 }, 00:13:46.249 "claimed": true, 00:13:46.249 "claim_type": "exclusive_write", 00:13:46.249 "zoned": false, 00:13:46.249 "supported_io_types": { 00:13:46.249 "read": true, 00:13:46.249 "write": true, 00:13:46.249 "unmap": true, 00:13:46.249 "flush": true, 00:13:46.249 "reset": true, 00:13:46.249 "nvme_admin": false, 00:13:46.249 "nvme_io": false, 00:13:46.249 "nvme_io_md": false, 00:13:46.249 "write_zeroes": true, 00:13:46.249 "zcopy": true, 00:13:46.249 "get_zone_info": false, 00:13:46.249 "zone_management": false, 00:13:46.249 "zone_append": false, 00:13:46.249 "compare": false, 00:13:46.249 "compare_and_write": false, 00:13:46.249 "abort": true, 00:13:46.249 "seek_hole": false, 00:13:46.249 "seek_data": false, 00:13:46.249 "copy": true, 00:13:46.249 "nvme_iov_md": false 00:13:46.249 }, 00:13:46.249 "memory_domains": [ 00:13:46.249 { 00:13:46.249 "dma_device_id": "system", 00:13:46.249 "dma_device_type": 1 00:13:46.249 }, 00:13:46.249 { 00:13:46.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.249 "dma_device_type": 2 00:13:46.249 } 00:13:46.249 ], 00:13:46.249 "driver_specific": {} 00:13:46.249 } 00:13:46.249 ] 00:13:46.249 12:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.249 12:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:46.249 12:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:46.249 12:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:46.249 12:12:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:46.249 12:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:46.249 12:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:46.249 12:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:46.249 12:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.249 12:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.249 12:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.249 12:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.249 12:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.249 12:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.249 12:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.249 12:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.249 12:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.249 12:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.249 "name": "Existed_Raid", 00:13:46.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.249 "strip_size_kb": 64, 00:13:46.249 "state": "configuring", 00:13:46.249 "raid_level": "concat", 00:13:46.249 "superblock": false, 00:13:46.249 "num_base_bdevs": 3, 00:13:46.249 "num_base_bdevs_discovered": 2, 00:13:46.249 "num_base_bdevs_operational": 3, 00:13:46.249 "base_bdevs_list": [ 00:13:46.249 { 00:13:46.249 "name": "BaseBdev1", 
00:13:46.249 "uuid": "c3fe1f30-c39c-4a71-b9c3-a6e2d86d5adc", 00:13:46.249 "is_configured": true, 00:13:46.249 "data_offset": 0, 00:13:46.249 "data_size": 65536 00:13:46.249 }, 00:13:46.249 { 00:13:46.249 "name": null, 00:13:46.249 "uuid": "da2f52b3-6f73-40d7-b118-ab1b8459b70f", 00:13:46.249 "is_configured": false, 00:13:46.249 "data_offset": 0, 00:13:46.249 "data_size": 65536 00:13:46.249 }, 00:13:46.249 { 00:13:46.249 "name": "BaseBdev3", 00:13:46.249 "uuid": "fb7afc8f-b8dc-4582-aa62-4d51391b9e00", 00:13:46.249 "is_configured": true, 00:13:46.249 "data_offset": 0, 00:13:46.249 "data_size": 65536 00:13:46.249 } 00:13:46.249 ] 00:13:46.249 }' 00:13:46.249 12:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.249 12:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.818 12:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:46.818 12:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.818 12:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.818 12:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.818 12:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.818 12:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:46.818 12:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:46.818 12:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.818 12:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.818 [2024-11-25 12:12:42.766905] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:46.818 
12:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.818 12:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:46.818 12:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:46.818 12:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:46.818 12:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:46.818 12:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:46.818 12:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:46.818 12:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.818 12:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.818 12:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.818 12:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.818 12:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.818 12:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.818 12:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.818 12:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.818 12:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.818 12:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.818 "name": "Existed_Raid", 00:13:46.818 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:46.818 "strip_size_kb": 64, 00:13:46.818 "state": "configuring", 00:13:46.818 "raid_level": "concat", 00:13:46.818 "superblock": false, 00:13:46.818 "num_base_bdevs": 3, 00:13:46.818 "num_base_bdevs_discovered": 1, 00:13:46.818 "num_base_bdevs_operational": 3, 00:13:46.818 "base_bdevs_list": [ 00:13:46.818 { 00:13:46.818 "name": "BaseBdev1", 00:13:46.818 "uuid": "c3fe1f30-c39c-4a71-b9c3-a6e2d86d5adc", 00:13:46.818 "is_configured": true, 00:13:46.818 "data_offset": 0, 00:13:46.818 "data_size": 65536 00:13:46.818 }, 00:13:46.818 { 00:13:46.818 "name": null, 00:13:46.818 "uuid": "da2f52b3-6f73-40d7-b118-ab1b8459b70f", 00:13:46.818 "is_configured": false, 00:13:46.818 "data_offset": 0, 00:13:46.818 "data_size": 65536 00:13:46.818 }, 00:13:46.818 { 00:13:46.818 "name": null, 00:13:46.818 "uuid": "fb7afc8f-b8dc-4582-aa62-4d51391b9e00", 00:13:46.818 "is_configured": false, 00:13:46.818 "data_offset": 0, 00:13:46.818 "data_size": 65536 00:13:46.818 } 00:13:46.818 ] 00:13:46.818 }' 00:13:46.818 12:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.818 12:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.385 12:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:47.385 12:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.385 12:12:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.385 12:12:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.385 12:12:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.385 12:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:47.385 12:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:47.385 12:12:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.385 12:12:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.385 [2024-11-25 12:12:43.331057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:47.385 12:12:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.386 12:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:47.386 12:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:47.386 12:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:47.386 12:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:47.386 12:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.386 12:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:47.386 12:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.386 12:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.386 12:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.386 12:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.386 12:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.386 12:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.386 12:12:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.386 12:12:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.386 12:12:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.386 12:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.386 "name": "Existed_Raid", 00:13:47.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.386 "strip_size_kb": 64, 00:13:47.386 "state": "configuring", 00:13:47.386 "raid_level": "concat", 00:13:47.386 "superblock": false, 00:13:47.386 "num_base_bdevs": 3, 00:13:47.386 "num_base_bdevs_discovered": 2, 00:13:47.386 "num_base_bdevs_operational": 3, 00:13:47.386 "base_bdevs_list": [ 00:13:47.386 { 00:13:47.386 "name": "BaseBdev1", 00:13:47.386 "uuid": "c3fe1f30-c39c-4a71-b9c3-a6e2d86d5adc", 00:13:47.386 "is_configured": true, 00:13:47.386 "data_offset": 0, 00:13:47.386 "data_size": 65536 00:13:47.386 }, 00:13:47.386 { 00:13:47.386 "name": null, 00:13:47.386 "uuid": "da2f52b3-6f73-40d7-b118-ab1b8459b70f", 00:13:47.386 "is_configured": false, 00:13:47.386 "data_offset": 0, 00:13:47.386 "data_size": 65536 00:13:47.386 }, 00:13:47.386 { 00:13:47.386 "name": "BaseBdev3", 00:13:47.386 "uuid": "fb7afc8f-b8dc-4582-aa62-4d51391b9e00", 00:13:47.386 "is_configured": true, 00:13:47.386 "data_offset": 0, 00:13:47.386 "data_size": 65536 00:13:47.386 } 00:13:47.386 ] 00:13:47.386 }' 00:13:47.386 12:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.386 12:12:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.953 12:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.953 12:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:47.953 12:12:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.953 12:12:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.953 12:12:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.953 12:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:47.954 12:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:47.954 12:12:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.954 12:12:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.954 [2024-11-25 12:12:43.911250] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:47.954 12:12:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.954 12:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:47.954 12:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:47.954 12:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:47.954 12:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:47.954 12:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.954 12:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:47.954 12:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.954 12:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.954 12:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.954 
12:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.954 12:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.954 12:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.954 12:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.954 12:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.954 12:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.213 12:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.213 "name": "Existed_Raid", 00:13:48.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.213 "strip_size_kb": 64, 00:13:48.213 "state": "configuring", 00:13:48.213 "raid_level": "concat", 00:13:48.213 "superblock": false, 00:13:48.213 "num_base_bdevs": 3, 00:13:48.213 "num_base_bdevs_discovered": 1, 00:13:48.213 "num_base_bdevs_operational": 3, 00:13:48.213 "base_bdevs_list": [ 00:13:48.213 { 00:13:48.213 "name": null, 00:13:48.213 "uuid": "c3fe1f30-c39c-4a71-b9c3-a6e2d86d5adc", 00:13:48.213 "is_configured": false, 00:13:48.213 "data_offset": 0, 00:13:48.213 "data_size": 65536 00:13:48.213 }, 00:13:48.213 { 00:13:48.213 "name": null, 00:13:48.213 "uuid": "da2f52b3-6f73-40d7-b118-ab1b8459b70f", 00:13:48.213 "is_configured": false, 00:13:48.213 "data_offset": 0, 00:13:48.213 "data_size": 65536 00:13:48.213 }, 00:13:48.213 { 00:13:48.213 "name": "BaseBdev3", 00:13:48.213 "uuid": "fb7afc8f-b8dc-4582-aa62-4d51391b9e00", 00:13:48.213 "is_configured": true, 00:13:48.213 "data_offset": 0, 00:13:48.213 "data_size": 65536 00:13:48.213 } 00:13:48.213 ] 00:13:48.213 }' 00:13:48.213 12:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.213 12:12:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.471 12:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.471 12:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.471 12:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.471 12:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:48.471 12:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.471 12:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:48.729 12:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:48.729 12:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.729 12:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.729 [2024-11-25 12:12:44.564099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:48.729 12:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.729 12:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:48.729 12:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:48.729 12:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:48.729 12:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:48.729 12:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:48.729 12:12:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:48.729 12:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.729 12:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.729 12:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.729 12:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.729 12:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.729 12:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.729 12:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.729 12:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.729 12:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.729 12:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.729 "name": "Existed_Raid", 00:13:48.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.729 "strip_size_kb": 64, 00:13:48.729 "state": "configuring", 00:13:48.729 "raid_level": "concat", 00:13:48.729 "superblock": false, 00:13:48.729 "num_base_bdevs": 3, 00:13:48.729 "num_base_bdevs_discovered": 2, 00:13:48.729 "num_base_bdevs_operational": 3, 00:13:48.729 "base_bdevs_list": [ 00:13:48.729 { 00:13:48.729 "name": null, 00:13:48.729 "uuid": "c3fe1f30-c39c-4a71-b9c3-a6e2d86d5adc", 00:13:48.729 "is_configured": false, 00:13:48.729 "data_offset": 0, 00:13:48.729 "data_size": 65536 00:13:48.729 }, 00:13:48.729 { 00:13:48.729 "name": "BaseBdev2", 00:13:48.729 "uuid": "da2f52b3-6f73-40d7-b118-ab1b8459b70f", 00:13:48.729 "is_configured": true, 00:13:48.729 "data_offset": 
0, 00:13:48.729 "data_size": 65536 00:13:48.729 }, 00:13:48.729 { 00:13:48.729 "name": "BaseBdev3", 00:13:48.729 "uuid": "fb7afc8f-b8dc-4582-aa62-4d51391b9e00", 00:13:48.729 "is_configured": true, 00:13:48.729 "data_offset": 0, 00:13:48.729 "data_size": 65536 00:13:48.729 } 00:13:48.729 ] 00:13:48.729 }' 00:13:48.729 12:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.729 12:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.012 12:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.012 12:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:49.012 12:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.012 12:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.012 12:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.292 12:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:49.292 12:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.292 12:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:49.292 12:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.292 12:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.292 12:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.292 12:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c3fe1f30-c39c-4a71-b9c3-a6e2d86d5adc 00:13:49.292 12:12:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.292 12:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.292 [2024-11-25 12:12:45.206326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:49.292 [2024-11-25 12:12:45.206400] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:49.292 [2024-11-25 12:12:45.206417] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:49.292 [2024-11-25 12:12:45.206726] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:49.292 [2024-11-25 12:12:45.206916] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:49.292 [2024-11-25 12:12:45.206940] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:49.292 [2024-11-25 12:12:45.207248] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.292 NewBaseBdev 00:13:49.292 12:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.292 12:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:49.292 12:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:49.292 12:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:49.292 12:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:49.292 12:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:49.292 12:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:49.292 12:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:49.292 
12:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.292 12:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.292 12:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.292 12:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:49.292 12:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.292 12:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.292 [ 00:13:49.292 { 00:13:49.292 "name": "NewBaseBdev", 00:13:49.292 "aliases": [ 00:13:49.292 "c3fe1f30-c39c-4a71-b9c3-a6e2d86d5adc" 00:13:49.292 ], 00:13:49.292 "product_name": "Malloc disk", 00:13:49.292 "block_size": 512, 00:13:49.292 "num_blocks": 65536, 00:13:49.292 "uuid": "c3fe1f30-c39c-4a71-b9c3-a6e2d86d5adc", 00:13:49.292 "assigned_rate_limits": { 00:13:49.292 "rw_ios_per_sec": 0, 00:13:49.292 "rw_mbytes_per_sec": 0, 00:13:49.292 "r_mbytes_per_sec": 0, 00:13:49.292 "w_mbytes_per_sec": 0 00:13:49.292 }, 00:13:49.292 "claimed": true, 00:13:49.292 "claim_type": "exclusive_write", 00:13:49.292 "zoned": false, 00:13:49.292 "supported_io_types": { 00:13:49.292 "read": true, 00:13:49.292 "write": true, 00:13:49.292 "unmap": true, 00:13:49.292 "flush": true, 00:13:49.292 "reset": true, 00:13:49.292 "nvme_admin": false, 00:13:49.292 "nvme_io": false, 00:13:49.292 "nvme_io_md": false, 00:13:49.292 "write_zeroes": true, 00:13:49.292 "zcopy": true, 00:13:49.292 "get_zone_info": false, 00:13:49.292 "zone_management": false, 00:13:49.292 "zone_append": false, 00:13:49.292 "compare": false, 00:13:49.292 "compare_and_write": false, 00:13:49.292 "abort": true, 00:13:49.292 "seek_hole": false, 00:13:49.292 "seek_data": false, 00:13:49.292 "copy": true, 00:13:49.292 "nvme_iov_md": false 00:13:49.292 }, 00:13:49.292 
"memory_domains": [ 00:13:49.292 { 00:13:49.292 "dma_device_id": "system", 00:13:49.292 "dma_device_type": 1 00:13:49.292 }, 00:13:49.292 { 00:13:49.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.292 "dma_device_type": 2 00:13:49.292 } 00:13:49.292 ], 00:13:49.292 "driver_specific": {} 00:13:49.292 } 00:13:49.292 ] 00:13:49.292 12:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.292 12:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:49.292 12:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:13:49.292 12:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:49.292 12:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:49.292 12:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:49.292 12:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:49.292 12:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:49.292 12:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.292 12:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.292 12:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.292 12:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.292 12:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.292 12:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.292 12:12:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:49.292 12:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.292 12:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.292 12:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.292 "name": "Existed_Raid", 00:13:49.292 "uuid": "8316cb22-8fe7-4dc2-bbbd-1754a4e0d323", 00:13:49.292 "strip_size_kb": 64, 00:13:49.292 "state": "online", 00:13:49.292 "raid_level": "concat", 00:13:49.292 "superblock": false, 00:13:49.292 "num_base_bdevs": 3, 00:13:49.292 "num_base_bdevs_discovered": 3, 00:13:49.292 "num_base_bdevs_operational": 3, 00:13:49.292 "base_bdevs_list": [ 00:13:49.292 { 00:13:49.292 "name": "NewBaseBdev", 00:13:49.292 "uuid": "c3fe1f30-c39c-4a71-b9c3-a6e2d86d5adc", 00:13:49.292 "is_configured": true, 00:13:49.292 "data_offset": 0, 00:13:49.292 "data_size": 65536 00:13:49.292 }, 00:13:49.292 { 00:13:49.292 "name": "BaseBdev2", 00:13:49.292 "uuid": "da2f52b3-6f73-40d7-b118-ab1b8459b70f", 00:13:49.292 "is_configured": true, 00:13:49.292 "data_offset": 0, 00:13:49.292 "data_size": 65536 00:13:49.292 }, 00:13:49.292 { 00:13:49.292 "name": "BaseBdev3", 00:13:49.292 "uuid": "fb7afc8f-b8dc-4582-aa62-4d51391b9e00", 00:13:49.292 "is_configured": true, 00:13:49.292 "data_offset": 0, 00:13:49.292 "data_size": 65536 00:13:49.292 } 00:13:49.292 ] 00:13:49.292 }' 00:13:49.292 12:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.292 12:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.861 12:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:49.861 12:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:49.861 12:12:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:49.861 12:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:49.861 12:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:49.861 12:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:49.861 12:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:49.861 12:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.861 12:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.861 12:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:49.861 [2024-11-25 12:12:45.770931] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:49.861 12:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.861 12:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:49.861 "name": "Existed_Raid", 00:13:49.861 "aliases": [ 00:13:49.861 "8316cb22-8fe7-4dc2-bbbd-1754a4e0d323" 00:13:49.861 ], 00:13:49.861 "product_name": "Raid Volume", 00:13:49.861 "block_size": 512, 00:13:49.861 "num_blocks": 196608, 00:13:49.861 "uuid": "8316cb22-8fe7-4dc2-bbbd-1754a4e0d323", 00:13:49.861 "assigned_rate_limits": { 00:13:49.861 "rw_ios_per_sec": 0, 00:13:49.861 "rw_mbytes_per_sec": 0, 00:13:49.861 "r_mbytes_per_sec": 0, 00:13:49.861 "w_mbytes_per_sec": 0 00:13:49.861 }, 00:13:49.861 "claimed": false, 00:13:49.861 "zoned": false, 00:13:49.861 "supported_io_types": { 00:13:49.861 "read": true, 00:13:49.861 "write": true, 00:13:49.861 "unmap": true, 00:13:49.861 "flush": true, 00:13:49.861 "reset": true, 00:13:49.861 "nvme_admin": false, 00:13:49.861 "nvme_io": false, 00:13:49.861 "nvme_io_md": false, 00:13:49.861 
"write_zeroes": true, 00:13:49.861 "zcopy": false, 00:13:49.861 "get_zone_info": false, 00:13:49.861 "zone_management": false, 00:13:49.861 "zone_append": false, 00:13:49.861 "compare": false, 00:13:49.861 "compare_and_write": false, 00:13:49.861 "abort": false, 00:13:49.861 "seek_hole": false, 00:13:49.861 "seek_data": false, 00:13:49.861 "copy": false, 00:13:49.861 "nvme_iov_md": false 00:13:49.861 }, 00:13:49.861 "memory_domains": [ 00:13:49.861 { 00:13:49.861 "dma_device_id": "system", 00:13:49.861 "dma_device_type": 1 00:13:49.861 }, 00:13:49.861 { 00:13:49.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.861 "dma_device_type": 2 00:13:49.861 }, 00:13:49.861 { 00:13:49.861 "dma_device_id": "system", 00:13:49.861 "dma_device_type": 1 00:13:49.861 }, 00:13:49.861 { 00:13:49.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.861 "dma_device_type": 2 00:13:49.861 }, 00:13:49.861 { 00:13:49.861 "dma_device_id": "system", 00:13:49.861 "dma_device_type": 1 00:13:49.861 }, 00:13:49.861 { 00:13:49.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.861 "dma_device_type": 2 00:13:49.861 } 00:13:49.861 ], 00:13:49.861 "driver_specific": { 00:13:49.861 "raid": { 00:13:49.861 "uuid": "8316cb22-8fe7-4dc2-bbbd-1754a4e0d323", 00:13:49.861 "strip_size_kb": 64, 00:13:49.861 "state": "online", 00:13:49.861 "raid_level": "concat", 00:13:49.861 "superblock": false, 00:13:49.861 "num_base_bdevs": 3, 00:13:49.861 "num_base_bdevs_discovered": 3, 00:13:49.861 "num_base_bdevs_operational": 3, 00:13:49.861 "base_bdevs_list": [ 00:13:49.861 { 00:13:49.861 "name": "NewBaseBdev", 00:13:49.861 "uuid": "c3fe1f30-c39c-4a71-b9c3-a6e2d86d5adc", 00:13:49.861 "is_configured": true, 00:13:49.861 "data_offset": 0, 00:13:49.861 "data_size": 65536 00:13:49.861 }, 00:13:49.861 { 00:13:49.861 "name": "BaseBdev2", 00:13:49.861 "uuid": "da2f52b3-6f73-40d7-b118-ab1b8459b70f", 00:13:49.861 "is_configured": true, 00:13:49.861 "data_offset": 0, 00:13:49.861 "data_size": 65536 00:13:49.861 }, 
00:13:49.861 { 00:13:49.861 "name": "BaseBdev3", 00:13:49.861 "uuid": "fb7afc8f-b8dc-4582-aa62-4d51391b9e00", 00:13:49.861 "is_configured": true, 00:13:49.861 "data_offset": 0, 00:13:49.861 "data_size": 65536 00:13:49.861 } 00:13:49.861 ] 00:13:49.861 } 00:13:49.861 } 00:13:49.861 }' 00:13:49.861 12:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:49.861 12:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:49.861 BaseBdev2 00:13:49.861 BaseBdev3' 00:13:49.861 12:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:49.861 12:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:49.861 12:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:49.861 12:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:49.861 12:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:49.861 12:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.861 12:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.861 12:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.120 12:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:50.120 12:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:50.120 12:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:50.120 12:12:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:50.120 12:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:50.120 12:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.120 12:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.120 12:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.120 12:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:50.120 12:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:50.120 12:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:50.120 12:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:50.120 12:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:50.120 12:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.120 12:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.120 12:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.120 12:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:50.120 12:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:50.120 12:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:50.121 12:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.121 
12:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.121 [2024-11-25 12:12:46.082632] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:50.121 [2024-11-25 12:12:46.082792] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:50.121 [2024-11-25 12:12:46.082928] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:50.121 [2024-11-25 12:12:46.083005] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:50.121 [2024-11-25 12:12:46.083026] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:50.121 12:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.121 12:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65648 00:13:50.121 12:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65648 ']' 00:13:50.121 12:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65648 00:13:50.121 12:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:13:50.121 12:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:50.121 12:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65648 00:13:50.121 killing process with pid 65648 00:13:50.121 12:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:50.121 12:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:50.121 12:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65648' 00:13:50.121 12:12:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 65648 00:13:50.121 [2024-11-25 12:12:46.122719] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:50.121 12:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65648 00:13:50.379 [2024-11-25 12:12:46.391249] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:51.757 ************************************ 00:13:51.757 END TEST raid_state_function_test 00:13:51.757 ************************************ 00:13:51.757 12:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:51.757 00:13:51.757 real 0m11.693s 00:13:51.757 user 0m19.411s 00:13:51.757 sys 0m1.569s 00:13:51.757 12:12:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:51.757 12:12:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.757 12:12:47 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:13:51.757 12:12:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:51.757 12:12:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:51.757 12:12:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:51.757 ************************************ 00:13:51.757 START TEST raid_state_function_test_sb 00:13:51.757 ************************************ 00:13:51.757 12:12:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:13:51.757 12:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:13:51.757 12:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:51.757 12:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:51.757 12:12:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:51.757 12:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:51.757 12:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:51.757 12:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:51.757 12:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:51.757 12:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:51.757 12:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:51.757 12:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:51.757 12:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:51.757 12:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:51.757 12:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:51.757 12:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:51.757 Process raid pid: 66279 00:13:51.757 12:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:51.757 12:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:51.757 12:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:51.757 12:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:51.757 12:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:51.757 12:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local 
superblock_create_arg 00:13:51.757 12:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:13:51.757 12:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:51.757 12:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:51.757 12:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:51.757 12:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:51.757 12:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66279 00:13:51.757 12:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66279' 00:13:51.757 12:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66279 00:13:51.757 12:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:51.757 12:12:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66279 ']' 00:13:51.757 12:12:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.757 12:12:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:51.757 12:12:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:51.757 12:12:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:51.757 12:12:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.757 [2024-11-25 12:12:47.587034] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:13:51.757 [2024-11-25 12:12:47.587214] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:51.757 [2024-11-25 12:12:47.775559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.016 [2024-11-25 12:12:47.930166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.275 [2024-11-25 12:12:48.152235] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:52.275 [2024-11-25 12:12:48.152288] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:52.634 12:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:52.634 12:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:52.634 12:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:52.634 12:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.634 12:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.634 [2024-11-25 12:12:48.586434] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:52.634 [2024-11-25 12:12:48.586497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:52.634 [2024-11-25 
12:12:48.586514] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:52.634 [2024-11-25 12:12:48.586531] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:52.634 [2024-11-25 12:12:48.586541] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:52.634 [2024-11-25 12:12:48.586556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:52.634 12:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.634 12:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:52.634 12:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:52.634 12:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:52.634 12:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:52.634 12:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:52.634 12:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:52.634 12:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.634 12:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.634 12:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.634 12:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.634 12:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.634 12:12:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.634 12:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.634 12:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.634 12:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.634 12:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.634 "name": "Existed_Raid", 00:13:52.634 "uuid": "c928cd3e-2ba1-4454-b619-dab3d87911ba", 00:13:52.634 "strip_size_kb": 64, 00:13:52.634 "state": "configuring", 00:13:52.634 "raid_level": "concat", 00:13:52.634 "superblock": true, 00:13:52.634 "num_base_bdevs": 3, 00:13:52.634 "num_base_bdevs_discovered": 0, 00:13:52.634 "num_base_bdevs_operational": 3, 00:13:52.634 "base_bdevs_list": [ 00:13:52.634 { 00:13:52.634 "name": "BaseBdev1", 00:13:52.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.634 "is_configured": false, 00:13:52.634 "data_offset": 0, 00:13:52.634 "data_size": 0 00:13:52.634 }, 00:13:52.634 { 00:13:52.635 "name": "BaseBdev2", 00:13:52.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.635 "is_configured": false, 00:13:52.635 "data_offset": 0, 00:13:52.635 "data_size": 0 00:13:52.635 }, 00:13:52.635 { 00:13:52.635 "name": "BaseBdev3", 00:13:52.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.635 "is_configured": false, 00:13:52.635 "data_offset": 0, 00:13:52.635 "data_size": 0 00:13:52.635 } 00:13:52.635 ] 00:13:52.635 }' 00:13:52.635 12:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.635 12:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.209 12:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:53.210 12:12:49 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.210 12:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.210 [2024-11-25 12:12:49.122502] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:53.210 [2024-11-25 12:12:49.122682] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:53.210 12:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.210 12:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:53.210 12:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.210 12:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.210 [2024-11-25 12:12:49.130497] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:53.210 [2024-11-25 12:12:49.130554] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:53.210 [2024-11-25 12:12:49.130570] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:53.210 [2024-11-25 12:12:49.130586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:53.210 [2024-11-25 12:12:49.130596] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:53.210 [2024-11-25 12:12:49.130611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:53.210 12:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.210 12:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:53.210 
12:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.210 12:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.210 [2024-11-25 12:12:49.174869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:53.210 BaseBdev1 00:13:53.210 12:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.210 12:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:53.210 12:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:53.210 12:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:53.210 12:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:53.210 12:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:53.210 12:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:53.210 12:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:53.210 12:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.210 12:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.210 12:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.210 12:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:53.210 12:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.210 12:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.210 [ 00:13:53.210 { 
00:13:53.210 "name": "BaseBdev1", 00:13:53.210 "aliases": [ 00:13:53.210 "611af749-276c-4c76-b756-a198bc04e19e" 00:13:53.210 ], 00:13:53.210 "product_name": "Malloc disk", 00:13:53.210 "block_size": 512, 00:13:53.210 "num_blocks": 65536, 00:13:53.210 "uuid": "611af749-276c-4c76-b756-a198bc04e19e", 00:13:53.210 "assigned_rate_limits": { 00:13:53.210 "rw_ios_per_sec": 0, 00:13:53.210 "rw_mbytes_per_sec": 0, 00:13:53.210 "r_mbytes_per_sec": 0, 00:13:53.210 "w_mbytes_per_sec": 0 00:13:53.210 }, 00:13:53.210 "claimed": true, 00:13:53.210 "claim_type": "exclusive_write", 00:13:53.210 "zoned": false, 00:13:53.210 "supported_io_types": { 00:13:53.210 "read": true, 00:13:53.210 "write": true, 00:13:53.210 "unmap": true, 00:13:53.210 "flush": true, 00:13:53.210 "reset": true, 00:13:53.210 "nvme_admin": false, 00:13:53.210 "nvme_io": false, 00:13:53.210 "nvme_io_md": false, 00:13:53.210 "write_zeroes": true, 00:13:53.210 "zcopy": true, 00:13:53.210 "get_zone_info": false, 00:13:53.210 "zone_management": false, 00:13:53.210 "zone_append": false, 00:13:53.210 "compare": false, 00:13:53.210 "compare_and_write": false, 00:13:53.210 "abort": true, 00:13:53.210 "seek_hole": false, 00:13:53.210 "seek_data": false, 00:13:53.210 "copy": true, 00:13:53.210 "nvme_iov_md": false 00:13:53.210 }, 00:13:53.210 "memory_domains": [ 00:13:53.210 { 00:13:53.210 "dma_device_id": "system", 00:13:53.210 "dma_device_type": 1 00:13:53.210 }, 00:13:53.210 { 00:13:53.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.210 "dma_device_type": 2 00:13:53.210 } 00:13:53.210 ], 00:13:53.210 "driver_specific": {} 00:13:53.210 } 00:13:53.210 ] 00:13:53.210 12:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.210 12:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:53.210 12:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:13:53.210 12:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:53.210 12:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:53.210 12:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:53.210 12:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:53.210 12:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:53.210 12:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.210 12:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.210 12:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.210 12:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.210 12:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.210 12:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.210 12:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.210 12:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.210 12:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.210 12:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.210 "name": "Existed_Raid", 00:13:53.210 "uuid": "b9ec2c8b-d15e-41a5-bb8c-13aad0c907ac", 00:13:53.210 "strip_size_kb": 64, 00:13:53.210 "state": "configuring", 00:13:53.210 "raid_level": "concat", 00:13:53.210 "superblock": true, 00:13:53.210 
"num_base_bdevs": 3, 00:13:53.210 "num_base_bdevs_discovered": 1, 00:13:53.210 "num_base_bdevs_operational": 3, 00:13:53.210 "base_bdevs_list": [ 00:13:53.210 { 00:13:53.210 "name": "BaseBdev1", 00:13:53.210 "uuid": "611af749-276c-4c76-b756-a198bc04e19e", 00:13:53.210 "is_configured": true, 00:13:53.210 "data_offset": 2048, 00:13:53.210 "data_size": 63488 00:13:53.210 }, 00:13:53.210 { 00:13:53.210 "name": "BaseBdev2", 00:13:53.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.210 "is_configured": false, 00:13:53.210 "data_offset": 0, 00:13:53.211 "data_size": 0 00:13:53.211 }, 00:13:53.211 { 00:13:53.211 "name": "BaseBdev3", 00:13:53.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.211 "is_configured": false, 00:13:53.211 "data_offset": 0, 00:13:53.211 "data_size": 0 00:13:53.211 } 00:13:53.211 ] 00:13:53.211 }' 00:13:53.211 12:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.211 12:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.779 12:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:53.779 12:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.779 12:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.779 [2024-11-25 12:12:49.719132] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:53.779 [2024-11-25 12:12:49.719205] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:53.779 12:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.779 12:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:53.779 
12:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.779 12:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.779 [2024-11-25 12:12:49.727196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:53.779 [2024-11-25 12:12:49.730167] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:53.779 [2024-11-25 12:12:49.730233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:53.779 [2024-11-25 12:12:49.730252] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:53.779 [2024-11-25 12:12:49.730271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:53.779 12:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.779 12:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:53.779 12:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:53.779 12:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:53.779 12:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:53.779 12:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:53.779 12:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:53.779 12:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:53.779 12:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:53.779 12:12:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.779 12:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.779 12:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.779 12:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.779 12:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.779 12:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.779 12:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.779 12:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.779 12:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.779 12:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.779 "name": "Existed_Raid", 00:13:53.779 "uuid": "a0ac101e-1213-490a-bc43-265a48a00cab", 00:13:53.779 "strip_size_kb": 64, 00:13:53.779 "state": "configuring", 00:13:53.779 "raid_level": "concat", 00:13:53.779 "superblock": true, 00:13:53.779 "num_base_bdevs": 3, 00:13:53.779 "num_base_bdevs_discovered": 1, 00:13:53.779 "num_base_bdevs_operational": 3, 00:13:53.779 "base_bdevs_list": [ 00:13:53.779 { 00:13:53.779 "name": "BaseBdev1", 00:13:53.779 "uuid": "611af749-276c-4c76-b756-a198bc04e19e", 00:13:53.779 "is_configured": true, 00:13:53.779 "data_offset": 2048, 00:13:53.779 "data_size": 63488 00:13:53.779 }, 00:13:53.779 { 00:13:53.779 "name": "BaseBdev2", 00:13:53.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.779 "is_configured": false, 00:13:53.779 "data_offset": 0, 00:13:53.779 "data_size": 0 00:13:53.779 }, 00:13:53.779 { 00:13:53.779 "name": "BaseBdev3", 00:13:53.779 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:53.779 "is_configured": false, 00:13:53.779 "data_offset": 0, 00:13:53.779 "data_size": 0 00:13:53.779 } 00:13:53.779 ] 00:13:53.779 }' 00:13:53.779 12:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.779 12:12:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.349 12:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:54.349 12:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.349 12:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.349 [2024-11-25 12:12:50.295404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:54.349 BaseBdev2 00:13:54.349 12:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.349 12:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:54.349 12:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:54.349 12:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:54.349 12:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:54.349 12:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:54.349 12:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:54.349 12:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:54.349 12:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.349 12:12:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x
00:13:54.349 12:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:54.349 12:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:13:54.349 12:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:54.349 12:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:54.349 [
00:13:54.349 {
00:13:54.349 "name": "BaseBdev2",
00:13:54.349 "aliases": [
00:13:54.349 "6024f3a2-22c5-4556-b7fb-940c12ad0468"
00:13:54.349 ],
00:13:54.349 "product_name": "Malloc disk",
00:13:54.349 "block_size": 512,
00:13:54.349 "num_blocks": 65536,
00:13:54.349 "uuid": "6024f3a2-22c5-4556-b7fb-940c12ad0468",
00:13:54.349 "assigned_rate_limits": {
00:13:54.349 "rw_ios_per_sec": 0,
00:13:54.349 "rw_mbytes_per_sec": 0,
00:13:54.349 "r_mbytes_per_sec": 0,
00:13:54.349 "w_mbytes_per_sec": 0
00:13:54.349 },
00:13:54.349 "claimed": true,
00:13:54.349 "claim_type": "exclusive_write",
00:13:54.349 "zoned": false,
00:13:54.349 "supported_io_types": {
00:13:54.349 "read": true,
00:13:54.349 "write": true,
00:13:54.349 "unmap": true,
00:13:54.349 "flush": true,
00:13:54.349 "reset": true,
00:13:54.349 "nvme_admin": false,
00:13:54.349 "nvme_io": false,
00:13:54.349 "nvme_io_md": false,
00:13:54.349 "write_zeroes": true,
00:13:54.349 "zcopy": true,
00:13:54.349 "get_zone_info": false,
00:13:54.349 "zone_management": false,
00:13:54.349 "zone_append": false,
00:13:54.349 "compare": false,
00:13:54.349 "compare_and_write": false,
00:13:54.349 "abort": true,
00:13:54.349 "seek_hole": false,
00:13:54.349 "seek_data": false,
00:13:54.349 "copy": true,
00:13:54.349 "nvme_iov_md": false
00:13:54.349 },
00:13:54.349 "memory_domains": [
00:13:54.349 {
00:13:54.349 "dma_device_id": "system",
00:13:54.349 "dma_device_type": 1
00:13:54.349 },
00:13:54.349 {
00:13:54.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:54.349 "dma_device_type": 2
00:13:54.349 }
00:13:54.349 ],
00:13:54.349 "driver_specific": {}
00:13:54.349 }
00:13:54.349 ]
00:13:54.349 12:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:54.349 12:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:13:54.349 12:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:13:54.349 12:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:13:54.349 12:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:13:54.349 12:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:54.349 12:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:54.349 12:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:13:54.349 12:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:54.349 12:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:54.349 12:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:54.349 12:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:54.349 12:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:54.349 12:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:54.349 12:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:54.349 12:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:54.349 12:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:54.349 12:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:54.349 12:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:54.349 12:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:54.349 "name": "Existed_Raid",
00:13:54.349 "uuid": "a0ac101e-1213-490a-bc43-265a48a00cab",
00:13:54.349 "strip_size_kb": 64,
00:13:54.349 "state": "configuring",
00:13:54.349 "raid_level": "concat",
00:13:54.349 "superblock": true,
00:13:54.349 "num_base_bdevs": 3,
00:13:54.349 "num_base_bdevs_discovered": 2,
00:13:54.349 "num_base_bdevs_operational": 3,
00:13:54.349 "base_bdevs_list": [
00:13:54.349 {
00:13:54.349 "name": "BaseBdev1",
00:13:54.349 "uuid": "611af749-276c-4c76-b756-a198bc04e19e",
00:13:54.349 "is_configured": true,
00:13:54.349 "data_offset": 2048,
00:13:54.349 "data_size": 63488
00:13:54.349 },
00:13:54.349 {
00:13:54.349 "name": "BaseBdev2",
00:13:54.349 "uuid": "6024f3a2-22c5-4556-b7fb-940c12ad0468",
00:13:54.349 "is_configured": true,
00:13:54.349 "data_offset": 2048,
00:13:54.349 "data_size": 63488
00:13:54.349 },
00:13:54.349 {
00:13:54.349 "name": "BaseBdev3",
00:13:54.349 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:54.349 "is_configured": false,
00:13:54.349 "data_offset": 0,
00:13:54.349 "data_size": 0
00:13:54.349 }
00:13:54.349 ]
00:13:54.349 }'
00:13:54.350 12:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:54.350 12:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:54.918 12:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:13:54.918 12:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:54.918 12:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:54.918 [2024-11-25 12:12:50.862966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:13:54.918 [2024-11-25 12:12:50.863273] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:13:54.918 [2024-11-25 12:12:50.863306] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:13:54.918 BaseBdev3 [2024-11-25 12:12:50.863672] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:13:54.918 [2024-11-25 12:12:50.863931] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:13:54.918 [2024-11-25 12:12:50.863949] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:13:54.918 [2024-11-25 12:12:50.864127] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:54.918 12:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:54.918 12:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:13:54.918 12:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:13:54.918 12:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:13:54.918 12:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:13:54.918 12:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:13:54.918 12:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:13:54.918 12:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:13:54.918 12:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:54.918 12:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:54.918 12:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:54.918 12:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:13:54.918 12:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:54.918 12:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:54.918 [
00:13:54.918 {
00:13:54.918 "name": "BaseBdev3",
00:13:54.918 "aliases": [
00:13:54.918 "f6d46da2-46c4-4bbd-96b8-a96b70ada7e3"
00:13:54.918 ],
00:13:54.918 "product_name": "Malloc disk",
00:13:54.918 "block_size": 512,
00:13:54.918 "num_blocks": 65536,
00:13:54.918 "uuid": "f6d46da2-46c4-4bbd-96b8-a96b70ada7e3",
00:13:54.918 "assigned_rate_limits": {
00:13:54.918 "rw_ios_per_sec": 0,
00:13:54.918 "rw_mbytes_per_sec": 0,
00:13:54.918 "r_mbytes_per_sec": 0,
00:13:54.918 "w_mbytes_per_sec": 0
00:13:54.918 },
00:13:54.918 "claimed": true,
00:13:54.918 "claim_type": "exclusive_write",
00:13:54.918 "zoned": false,
00:13:54.918 "supported_io_types": {
00:13:54.918 "read": true,
00:13:54.918 "write": true,
00:13:54.918 "unmap": true,
00:13:54.918 "flush": true,
00:13:54.918 "reset": true,
00:13:54.918 "nvme_admin": false,
00:13:54.918 "nvme_io": false,
00:13:54.918 "nvme_io_md": false,
00:13:54.918 "write_zeroes": true,
00:13:54.918 "zcopy": true,
00:13:54.918 "get_zone_info": false,
00:13:54.918 "zone_management": false,
00:13:54.918 "zone_append": false,
00:13:54.918 "compare": false,
00:13:54.918 "compare_and_write": false,
00:13:54.918 "abort": true,
00:13:54.918 "seek_hole": false,
00:13:54.918 "seek_data": false,
00:13:54.918 "copy": true,
00:13:54.918 "nvme_iov_md": false
00:13:54.918 },
00:13:54.918 "memory_domains": [
00:13:54.918 {
00:13:54.918 "dma_device_id": "system",
00:13:54.918 "dma_device_type": 1
00:13:54.918 },
00:13:54.918 {
00:13:54.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:54.918 "dma_device_type": 2
00:13:54.918 }
00:13:54.918 ],
00:13:54.918 "driver_specific": {}
00:13:54.918 }
00:13:54.918 ]
00:13:54.918 12:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:54.918 12:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:13:54.918 12:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:13:54.918 12:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:13:54.918 12:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3
00:13:54.918 12:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:54.918 12:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:54.918 12:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:13:54.918 12:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:54.918 12:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:54.918 12:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:54.918 12:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:54.918 12:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:54.918 12:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:54.918 12:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:54.918 12:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:54.918 12:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:54.918 12:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:54.918 12:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:54.918 12:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:54.918 "name": "Existed_Raid",
00:13:54.918 "uuid": "a0ac101e-1213-490a-bc43-265a48a00cab",
00:13:54.918 "strip_size_kb": 64,
00:13:54.918 "state": "online",
00:13:54.918 "raid_level": "concat",
00:13:54.918 "superblock": true,
00:13:54.918 "num_base_bdevs": 3,
00:13:54.918 "num_base_bdevs_discovered": 3,
00:13:54.918 "num_base_bdevs_operational": 3,
00:13:54.918 "base_bdevs_list": [
00:13:54.918 {
00:13:54.919 "name": "BaseBdev1",
00:13:54.919 "uuid": "611af749-276c-4c76-b756-a198bc04e19e",
00:13:54.919 "is_configured": true,
00:13:54.919 "data_offset": 2048,
00:13:54.919 "data_size": 63488
00:13:54.919 },
00:13:54.919 {
00:13:54.919 "name": "BaseBdev2",
00:13:54.919 "uuid": "6024f3a2-22c5-4556-b7fb-940c12ad0468",
00:13:54.919 "is_configured": true,
00:13:54.919 "data_offset": 2048,
00:13:54.919 "data_size": 63488
00:13:54.919 },
00:13:54.919 {
00:13:54.919 "name": "BaseBdev3",
00:13:54.919 "uuid": "f6d46da2-46c4-4bbd-96b8-a96b70ada7e3",
00:13:54.919 "is_configured": true,
00:13:54.919 "data_offset": 2048,
00:13:54.919 "data_size": 63488
00:13:54.919 }
00:13:54.919 ]
00:13:54.919 }'
00:13:54.919 12:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:54.919 12:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:55.487 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:13:55.487 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:13:55.487 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:13:55.487 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:13:55.487 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:13:55.487 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:13:55.487 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:13:55.487 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:13:55.487 12:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:55.487 12:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:55.487 [2024-11-25 12:12:51.435572] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:55.487 12:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:55.487 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:13:55.487 "name": "Existed_Raid",
00:13:55.487 "aliases": [
00:13:55.487 "a0ac101e-1213-490a-bc43-265a48a00cab"
00:13:55.487 ],
00:13:55.487 "product_name": "Raid Volume",
00:13:55.487 "block_size": 512,
00:13:55.487 "num_blocks": 190464,
00:13:55.487 "uuid": "a0ac101e-1213-490a-bc43-265a48a00cab",
00:13:55.487 "assigned_rate_limits": {
00:13:55.487 "rw_ios_per_sec": 0,
00:13:55.487 "rw_mbytes_per_sec": 0,
00:13:55.487 "r_mbytes_per_sec": 0,
00:13:55.487 "w_mbytes_per_sec": 0
00:13:55.487 },
00:13:55.487 "claimed": false,
00:13:55.487 "zoned": false,
00:13:55.487 "supported_io_types": {
00:13:55.487 "read": true,
00:13:55.487 "write": true,
00:13:55.487 "unmap": true,
00:13:55.487 "flush": true,
00:13:55.487 "reset": true,
00:13:55.487 "nvme_admin": false,
00:13:55.487 "nvme_io": false,
00:13:55.487 "nvme_io_md": false,
00:13:55.487 "write_zeroes": true,
00:13:55.487 "zcopy": false,
00:13:55.487 "get_zone_info": false,
00:13:55.487 "zone_management": false,
00:13:55.487 "zone_append": false,
00:13:55.487 "compare": false,
00:13:55.487 "compare_and_write": false,
00:13:55.487 "abort": false,
00:13:55.487 "seek_hole": false,
00:13:55.487 "seek_data": false,
00:13:55.487 "copy": false,
00:13:55.487 "nvme_iov_md": false
00:13:55.487 },
00:13:55.487 "memory_domains": [
00:13:55.488 {
00:13:55.488 "dma_device_id": "system",
00:13:55.488 "dma_device_type": 1
00:13:55.488 },
00:13:55.488 {
00:13:55.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:55.488 "dma_device_type": 2
00:13:55.488 },
00:13:55.488 {
00:13:55.488 "dma_device_id": "system",
00:13:55.488 "dma_device_type": 1
00:13:55.488 },
00:13:55.488 {
00:13:55.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:55.488 "dma_device_type": 2
00:13:55.488 },
00:13:55.488 {
00:13:55.488 "dma_device_id": "system",
00:13:55.488 "dma_device_type": 1
00:13:55.488 },
00:13:55.488 {
00:13:55.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:55.488 "dma_device_type": 2
00:13:55.488 }
00:13:55.488 ],
00:13:55.488 "driver_specific": {
00:13:55.488 "raid": {
00:13:55.488 "uuid": "a0ac101e-1213-490a-bc43-265a48a00cab",
00:13:55.488 "strip_size_kb": 64,
00:13:55.488 "state": "online",
00:13:55.488 "raid_level": "concat",
00:13:55.488 "superblock": true,
00:13:55.488 "num_base_bdevs": 3,
00:13:55.488 "num_base_bdevs_discovered": 3,
00:13:55.488 "num_base_bdevs_operational": 3,
00:13:55.488 "base_bdevs_list": [
00:13:55.488 {
00:13:55.488 "name": "BaseBdev1",
00:13:55.488 "uuid": "611af749-276c-4c76-b756-a198bc04e19e",
00:13:55.488 "is_configured": true,
00:13:55.488 "data_offset": 2048,
00:13:55.488 "data_size": 63488
00:13:55.488 },
00:13:55.488 {
00:13:55.488 "name": "BaseBdev2",
00:13:55.488 "uuid": "6024f3a2-22c5-4556-b7fb-940c12ad0468",
00:13:55.488 "is_configured": true,
00:13:55.488 "data_offset": 2048,
00:13:55.488 "data_size": 63488
00:13:55.488 },
00:13:55.488 {
00:13:55.488 "name": "BaseBdev3",
00:13:55.488 "uuid": "f6d46da2-46c4-4bbd-96b8-a96b70ada7e3",
00:13:55.488 "is_configured": true,
00:13:55.488 "data_offset": 2048,
00:13:55.488 "data_size": 63488
00:13:55.488 }
00:13:55.488 ]
00:13:55.488 }
00:13:55.488 }
00:13:55.488 }'
00:13:55.488 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:13:55.488 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:13:55.488 BaseBdev2
00:13:55.488 BaseBdev3'
00:13:55.488 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:55.488 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:13:55.488 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:55.488 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:13:55.488 12:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:55.488 12:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:55.746 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:55.746 12:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:55.746 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:55.746 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:55.746 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:55.746 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:55.746 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:13:55.746 12:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:55.746 12:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:55.746 12:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:55.746 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:55.746 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:55.746 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:55.746 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:13:55.746 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:55.746 12:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:55.746 12:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:55.746 12:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:55.746 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:55.746 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:55.746 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:13:55.746 12:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:55.746 12:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:55.746 [2024-11-25 12:12:51.755333] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:13:55.746 [2024-11-25 12:12:51.755380] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:13:55.746 [2024-11-25 12:12:51.755451] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:56.006 12:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:56.006 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:13:56.006 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:13:56.006 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:13:56.006 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1
00:13:56.006 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:13:56.006 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2
00:13:56.006 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:56.006 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:13:56.006 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:13:56.006 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:56.006 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:13:56.006 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:56.006 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:56.006 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:56.006 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:56.006 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:56.006 12:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:56.006 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:56.006 12:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:56.006 12:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:56.006 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:56.006 "name": "Existed_Raid",
00:13:56.006 "uuid": "a0ac101e-1213-490a-bc43-265a48a00cab",
00:13:56.006 "strip_size_kb": 64,
00:13:56.006 "state": "offline",
00:13:56.006 "raid_level": "concat",
00:13:56.006 "superblock": true,
00:13:56.006 "num_base_bdevs": 3,
00:13:56.006 "num_base_bdevs_discovered": 2,
00:13:56.006 "num_base_bdevs_operational": 2,
00:13:56.006 "base_bdevs_list": [
00:13:56.006 {
00:13:56.006 "name": null,
00:13:56.006 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:56.006 "is_configured": false,
00:13:56.006 "data_offset": 0,
00:13:56.006 "data_size": 63488
00:13:56.006 },
00:13:56.006 {
00:13:56.006 "name": "BaseBdev2",
00:13:56.006 "uuid": "6024f3a2-22c5-4556-b7fb-940c12ad0468",
00:13:56.006 "is_configured": true,
00:13:56.006 "data_offset": 2048,
00:13:56.006 "data_size": 63488
00:13:56.006 },
00:13:56.006 {
00:13:56.006 "name": "BaseBdev3",
00:13:56.006 "uuid": "f6d46da2-46c4-4bbd-96b8-a96b70ada7e3",
00:13:56.006 "is_configured": true,
00:13:56.006 "data_offset": 2048,
00:13:56.006 "data_size": 63488
00:13:56.006 }
00:13:56.006 ]
00:13:56.006 }'
00:13:56.006 12:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:56.006 12:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:56.265 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:13:56.265 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:13:56.265 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:13:56.265 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:56.265 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:56.265 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:56.525 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:56.525 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:13:56.525 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:13:56.525 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:13:56.525 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:56.525 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:56.525 [2024-11-25 12:12:52.399557] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:13:56.525 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:56.525 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:13:56.525 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:13:56.525 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:56.525 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:56.525 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:56.525 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:13:56.525 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:56.525 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:13:56.525 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:13:56.525 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:13:56.525 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:56.525 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:56.525 [2024-11-25 12:12:52.543406] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:13:56.525 [2024-11-25 12:12:52.543470] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:56.785 BaseBdev2
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:56.785 [
00:13:56.785 {
00:13:56.785 "name": "BaseBdev2",
00:13:56.785 "aliases": [
00:13:56.785 "50502567-695a-4e7c-9593-cb362fd99a5d"
00:13:56.785 ],
00:13:56.785 "product_name": "Malloc disk",
00:13:56.785 "block_size": 512,
00:13:56.785 "num_blocks": 65536,
00:13:56.785 "uuid": "50502567-695a-4e7c-9593-cb362fd99a5d",
00:13:56.785 "assigned_rate_limits": {
00:13:56.785 "rw_ios_per_sec": 0,
00:13:56.785 "rw_mbytes_per_sec": 0,
00:13:56.785 "r_mbytes_per_sec": 0,
00:13:56.785 "w_mbytes_per_sec": 0
00:13:56.785 },
00:13:56.785 "claimed": false,
00:13:56.785 "zoned": false,
00:13:56.785 "supported_io_types": {
00:13:56.785 "read": true,
00:13:56.785 "write": true,
00:13:56.785 "unmap": true,
00:13:56.785 "flush": true,
00:13:56.785 "reset": true,
00:13:56.785 "nvme_admin": false,
00:13:56.785 "nvme_io": false,
00:13:56.785 "nvme_io_md": false,
00:13:56.785 "write_zeroes": true,
00:13:56.785 "zcopy": true,
00:13:56.785 "get_zone_info": false,
00:13:56.785 "zone_management": false,
00:13:56.785 "zone_append": false,
00:13:56.785 "compare": false,
00:13:56.785 "compare_and_write": false,
00:13:56.785 "abort": true,
00:13:56.785 "seek_hole": false,
00:13:56.785 "seek_data": false,
00:13:56.785 "copy": true,
00:13:56.785 "nvme_iov_md": false
00:13:56.785 },
00:13:56.785 "memory_domains": [
00:13:56.785 {
00:13:56.785 "dma_device_id": "system",
00:13:56.785 "dma_device_type": 1
00:13:56.785 },
00:13:56.785 {
00:13:56.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:56.785 "dma_device_type": 2
00:13:56.785 }
00:13:56.785 ],
00:13:56.785 "driver_specific": {}
00:13:56.785 }
00:13:56.785 ]
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:56.785 BaseBdev3
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:56.785 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:56.786 [
00:13:56.786 {
00:13:56.786 "name": "BaseBdev3",
00:13:56.786 "aliases": [
00:13:56.786 "d61c5548-db0b-4ced-83ac-800405eaaacd"
00:13:56.786 ],
00:13:56.786 "product_name": "Malloc disk",
00:13:56.786 "block_size": 512,
00:13:56.786 "num_blocks": 65536,
00:13:56.786 "uuid": "d61c5548-db0b-4ced-83ac-800405eaaacd",
00:13:56.786 "assigned_rate_limits": {
00:13:56.786 "rw_ios_per_sec": 0,
00:13:56.786 "rw_mbytes_per_sec": 0,
00:13:56.786 "r_mbytes_per_sec": 0,
00:13:56.786 "w_mbytes_per_sec": 0
00:13:56.786 },
00:13:56.786 "claimed": false,
00:13:56.786 "zoned": false,
00:13:56.786 "supported_io_types": {
00:13:56.786 "read": true,
00:13:56.786 "write": true,
00:13:56.786 "unmap": true,
00:13:56.786 "flush": true,
00:13:56.786 "reset": true,
00:13:56.786 "nvme_admin": false,
00:13:56.786 "nvme_io": false,
00:13:56.786 "nvme_io_md": false,
00:13:56.786 "write_zeroes": true,
00:13:56.786 "zcopy": true,
00:13:56.786 "get_zone_info": false,
00:13:56.786 "zone_management": false,
00:13:56.786 "zone_append": false,
00:13:56.786 "compare": false,
00:13:56.786 "compare_and_write": false,
00:13:56.786 "abort": true,
00:13:56.786 "seek_hole": false,
00:13:56.786 "seek_data": false,
00:13:56.786 "copy": true,
00:13:56.786 "nvme_iov_md": false
00:13:56.786 },
00:13:56.786 "memory_domains": [
00:13:56.786 {
00:13:56.786 "dma_device_id": "system",
00:13:56.786 "dma_device_type": 1
00:13:56.786 },
00:13:56.786 {
00:13:56.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:56.786 "dma_device_type": 2
00:13:56.786 }
00:13:56.786 ],
00:13:56.786 "driver_specific": {}
00:13:56.786 }
00:13:56.786 ]
00:13:56.786 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:56.786 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:13:56.786 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:13:56.786 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:13:56.786 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:13:56.786 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:56.786 12:12:52 bdev_raid.raid_state_function_test_sb --
common/autotest_common.sh@10 -- # set +x 00:13:56.786 [2024-11-25 12:12:52.827526] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:56.786 [2024-11-25 12:12:52.827701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:56.786 [2024-11-25 12:12:52.827835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:56.786 [2024-11-25 12:12:52.830330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:56.786 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.786 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:56.786 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.786 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:56.786 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:56.786 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:56.786 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:56.786 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.786 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.786 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.786 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.786 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.786 12:12:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.786 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.786 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.786 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.045 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.045 "name": "Existed_Raid", 00:13:57.045 "uuid": "33fdbba9-ba83-40a8-a4b2-2369472296fc", 00:13:57.045 "strip_size_kb": 64, 00:13:57.045 "state": "configuring", 00:13:57.045 "raid_level": "concat", 00:13:57.045 "superblock": true, 00:13:57.045 "num_base_bdevs": 3, 00:13:57.045 "num_base_bdevs_discovered": 2, 00:13:57.045 "num_base_bdevs_operational": 3, 00:13:57.045 "base_bdevs_list": [ 00:13:57.045 { 00:13:57.045 "name": "BaseBdev1", 00:13:57.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.045 "is_configured": false, 00:13:57.045 "data_offset": 0, 00:13:57.045 "data_size": 0 00:13:57.045 }, 00:13:57.045 { 00:13:57.045 "name": "BaseBdev2", 00:13:57.045 "uuid": "50502567-695a-4e7c-9593-cb362fd99a5d", 00:13:57.045 "is_configured": true, 00:13:57.045 "data_offset": 2048, 00:13:57.045 "data_size": 63488 00:13:57.045 }, 00:13:57.045 { 00:13:57.045 "name": "BaseBdev3", 00:13:57.045 "uuid": "d61c5548-db0b-4ced-83ac-800405eaaacd", 00:13:57.045 "is_configured": true, 00:13:57.045 "data_offset": 2048, 00:13:57.045 "data_size": 63488 00:13:57.045 } 00:13:57.045 ] 00:13:57.045 }' 00:13:57.045 12:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.045 12:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.377 12:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:13:57.377 12:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.377 12:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.377 [2024-11-25 12:12:53.319670] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:57.377 12:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.377 12:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:57.377 12:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:57.377 12:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:57.377 12:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:57.377 12:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.377 12:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:57.377 12:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.377 12:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.377 12:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.377 12:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.377 12:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.377 12:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.377 12:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:13:57.377 12:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.377 12:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.377 12:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.377 "name": "Existed_Raid", 00:13:57.377 "uuid": "33fdbba9-ba83-40a8-a4b2-2369472296fc", 00:13:57.377 "strip_size_kb": 64, 00:13:57.377 "state": "configuring", 00:13:57.377 "raid_level": "concat", 00:13:57.377 "superblock": true, 00:13:57.377 "num_base_bdevs": 3, 00:13:57.377 "num_base_bdevs_discovered": 1, 00:13:57.377 "num_base_bdevs_operational": 3, 00:13:57.377 "base_bdevs_list": [ 00:13:57.377 { 00:13:57.377 "name": "BaseBdev1", 00:13:57.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.377 "is_configured": false, 00:13:57.377 "data_offset": 0, 00:13:57.377 "data_size": 0 00:13:57.377 }, 00:13:57.377 { 00:13:57.377 "name": null, 00:13:57.377 "uuid": "50502567-695a-4e7c-9593-cb362fd99a5d", 00:13:57.377 "is_configured": false, 00:13:57.377 "data_offset": 0, 00:13:57.377 "data_size": 63488 00:13:57.377 }, 00:13:57.377 { 00:13:57.377 "name": "BaseBdev3", 00:13:57.377 "uuid": "d61c5548-db0b-4ced-83ac-800405eaaacd", 00:13:57.377 "is_configured": true, 00:13:57.377 "data_offset": 2048, 00:13:57.377 "data_size": 63488 00:13:57.377 } 00:13:57.377 ] 00:13:57.377 }' 00:13:57.377 12:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.377 12:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.954 12:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.954 12:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.954 12:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.954 12:12:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:57.954 12:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.954 12:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:57.954 12:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:57.954 12:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.954 12:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.954 [2024-11-25 12:12:53.925473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:57.954 BaseBdev1 00:13:57.954 12:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.954 12:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:57.954 12:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:57.954 12:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:57.954 12:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:57.954 12:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:57.954 12:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:57.954 12:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:57.954 12:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.954 12:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.954 
12:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.954 12:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:57.954 12:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.954 12:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.954 [ 00:13:57.954 { 00:13:57.954 "name": "BaseBdev1", 00:13:57.954 "aliases": [ 00:13:57.954 "ef5c90d3-d624-4ae4-acc2-cd0ec4d0b829" 00:13:57.954 ], 00:13:57.954 "product_name": "Malloc disk", 00:13:57.954 "block_size": 512, 00:13:57.954 "num_blocks": 65536, 00:13:57.954 "uuid": "ef5c90d3-d624-4ae4-acc2-cd0ec4d0b829", 00:13:57.954 "assigned_rate_limits": { 00:13:57.954 "rw_ios_per_sec": 0, 00:13:57.954 "rw_mbytes_per_sec": 0, 00:13:57.954 "r_mbytes_per_sec": 0, 00:13:57.954 "w_mbytes_per_sec": 0 00:13:57.954 }, 00:13:57.954 "claimed": true, 00:13:57.954 "claim_type": "exclusive_write", 00:13:57.954 "zoned": false, 00:13:57.954 "supported_io_types": { 00:13:57.954 "read": true, 00:13:57.954 "write": true, 00:13:57.954 "unmap": true, 00:13:57.954 "flush": true, 00:13:57.954 "reset": true, 00:13:57.954 "nvme_admin": false, 00:13:57.954 "nvme_io": false, 00:13:57.954 "nvme_io_md": false, 00:13:57.954 "write_zeroes": true, 00:13:57.954 "zcopy": true, 00:13:57.954 "get_zone_info": false, 00:13:57.954 "zone_management": false, 00:13:57.954 "zone_append": false, 00:13:57.954 "compare": false, 00:13:57.954 "compare_and_write": false, 00:13:57.954 "abort": true, 00:13:57.954 "seek_hole": false, 00:13:57.954 "seek_data": false, 00:13:57.954 "copy": true, 00:13:57.954 "nvme_iov_md": false 00:13:57.954 }, 00:13:57.954 "memory_domains": [ 00:13:57.954 { 00:13:57.954 "dma_device_id": "system", 00:13:57.954 "dma_device_type": 1 00:13:57.954 }, 00:13:57.954 { 00:13:57.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:13:57.954 "dma_device_type": 2 00:13:57.954 } 00:13:57.954 ], 00:13:57.954 "driver_specific": {} 00:13:57.954 } 00:13:57.954 ] 00:13:57.954 12:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.954 12:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:57.954 12:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:57.954 12:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:57.954 12:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:57.954 12:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:57.954 12:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.954 12:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:57.954 12:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.954 12:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.954 12:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.954 12:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.954 12:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.954 12:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.954 12:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.954 12:12:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:57.954 12:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.954 12:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.954 "name": "Existed_Raid", 00:13:57.954 "uuid": "33fdbba9-ba83-40a8-a4b2-2369472296fc", 00:13:57.954 "strip_size_kb": 64, 00:13:57.954 "state": "configuring", 00:13:57.954 "raid_level": "concat", 00:13:57.954 "superblock": true, 00:13:57.954 "num_base_bdevs": 3, 00:13:57.954 "num_base_bdevs_discovered": 2, 00:13:57.954 "num_base_bdevs_operational": 3, 00:13:57.954 "base_bdevs_list": [ 00:13:57.954 { 00:13:57.954 "name": "BaseBdev1", 00:13:57.954 "uuid": "ef5c90d3-d624-4ae4-acc2-cd0ec4d0b829", 00:13:57.954 "is_configured": true, 00:13:57.954 "data_offset": 2048, 00:13:57.954 "data_size": 63488 00:13:57.954 }, 00:13:57.954 { 00:13:57.955 "name": null, 00:13:57.955 "uuid": "50502567-695a-4e7c-9593-cb362fd99a5d", 00:13:57.955 "is_configured": false, 00:13:57.955 "data_offset": 0, 00:13:57.955 "data_size": 63488 00:13:57.955 }, 00:13:57.955 { 00:13:57.955 "name": "BaseBdev3", 00:13:57.955 "uuid": "d61c5548-db0b-4ced-83ac-800405eaaacd", 00:13:57.955 "is_configured": true, 00:13:57.955 "data_offset": 2048, 00:13:57.955 "data_size": 63488 00:13:57.955 } 00:13:57.955 ] 00:13:57.955 }' 00:13:57.955 12:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.955 12:12:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.522 12:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.522 12:12:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.522 12:12:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.522 12:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- 
# jq '.[0].base_bdevs_list[0].is_configured' 00:13:58.522 12:12:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.522 12:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:58.522 12:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:58.522 12:12:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.522 12:12:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.522 [2024-11-25 12:12:54.517697] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:58.522 12:12:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.522 12:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:58.522 12:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:58.522 12:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:58.522 12:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:58.522 12:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:58.522 12:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:58.522 12:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.522 12:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.522 12:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.522 12:12:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.522 12:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.522 12:12:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.522 12:12:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.522 12:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.523 12:12:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.523 12:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.523 "name": "Existed_Raid", 00:13:58.523 "uuid": "33fdbba9-ba83-40a8-a4b2-2369472296fc", 00:13:58.523 "strip_size_kb": 64, 00:13:58.523 "state": "configuring", 00:13:58.523 "raid_level": "concat", 00:13:58.523 "superblock": true, 00:13:58.523 "num_base_bdevs": 3, 00:13:58.523 "num_base_bdevs_discovered": 1, 00:13:58.523 "num_base_bdevs_operational": 3, 00:13:58.523 "base_bdevs_list": [ 00:13:58.523 { 00:13:58.523 "name": "BaseBdev1", 00:13:58.523 "uuid": "ef5c90d3-d624-4ae4-acc2-cd0ec4d0b829", 00:13:58.523 "is_configured": true, 00:13:58.523 "data_offset": 2048, 00:13:58.523 "data_size": 63488 00:13:58.523 }, 00:13:58.523 { 00:13:58.523 "name": null, 00:13:58.523 "uuid": "50502567-695a-4e7c-9593-cb362fd99a5d", 00:13:58.523 "is_configured": false, 00:13:58.523 "data_offset": 0, 00:13:58.523 "data_size": 63488 00:13:58.523 }, 00:13:58.523 { 00:13:58.523 "name": null, 00:13:58.523 "uuid": "d61c5548-db0b-4ced-83ac-800405eaaacd", 00:13:58.523 "is_configured": false, 00:13:58.523 "data_offset": 0, 00:13:58.523 "data_size": 63488 00:13:58.523 } 00:13:58.523 ] 00:13:58.523 }' 00:13:58.523 12:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.523 12:12:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:59.091 12:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.091 12:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.091 12:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.091 12:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:59.091 12:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.091 12:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:59.091 12:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:59.091 12:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.091 12:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.091 [2024-11-25 12:12:55.077890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:59.091 12:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.091 12:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:59.091 12:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:59.091 12:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:59.091 12:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:59.091 12:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:59.091 12:12:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:59.091 12:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.091 12:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.091 12:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.091 12:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.091 12:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.091 12:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.091 12:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.091 12:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.091 12:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.091 12:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.091 "name": "Existed_Raid", 00:13:59.091 "uuid": "33fdbba9-ba83-40a8-a4b2-2369472296fc", 00:13:59.091 "strip_size_kb": 64, 00:13:59.091 "state": "configuring", 00:13:59.091 "raid_level": "concat", 00:13:59.091 "superblock": true, 00:13:59.091 "num_base_bdevs": 3, 00:13:59.091 "num_base_bdevs_discovered": 2, 00:13:59.091 "num_base_bdevs_operational": 3, 00:13:59.091 "base_bdevs_list": [ 00:13:59.091 { 00:13:59.091 "name": "BaseBdev1", 00:13:59.091 "uuid": "ef5c90d3-d624-4ae4-acc2-cd0ec4d0b829", 00:13:59.091 "is_configured": true, 00:13:59.091 "data_offset": 2048, 00:13:59.091 "data_size": 63488 00:13:59.091 }, 00:13:59.091 { 00:13:59.091 "name": null, 00:13:59.091 "uuid": "50502567-695a-4e7c-9593-cb362fd99a5d", 00:13:59.091 "is_configured": 
false, 00:13:59.091 "data_offset": 0, 00:13:59.091 "data_size": 63488 00:13:59.091 }, 00:13:59.091 { 00:13:59.091 "name": "BaseBdev3", 00:13:59.091 "uuid": "d61c5548-db0b-4ced-83ac-800405eaaacd", 00:13:59.091 "is_configured": true, 00:13:59.091 "data_offset": 2048, 00:13:59.091 "data_size": 63488 00:13:59.091 } 00:13:59.091 ] 00:13:59.091 }' 00:13:59.091 12:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.091 12:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.657 12:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.657 12:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.657 12:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.657 12:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:59.657 12:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.657 12:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:59.657 12:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:59.657 12:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.657 12:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.657 [2024-11-25 12:12:55.650114] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:59.657 12:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.657 12:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:59.657 12:12:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:59.657 12:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:59.657 12:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:59.657 12:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:59.658 12:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:59.658 12:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.658 12:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.658 12:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.658 12:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.658 12:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.658 12:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.658 12:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.658 12:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.917 12:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.917 12:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.917 "name": "Existed_Raid", 00:13:59.917 "uuid": "33fdbba9-ba83-40a8-a4b2-2369472296fc", 00:13:59.917 "strip_size_kb": 64, 00:13:59.917 "state": "configuring", 00:13:59.917 "raid_level": "concat", 00:13:59.917 "superblock": true, 00:13:59.917 "num_base_bdevs": 3, 00:13:59.917 
"num_base_bdevs_discovered": 1, 00:13:59.917 "num_base_bdevs_operational": 3, 00:13:59.917 "base_bdevs_list": [ 00:13:59.917 { 00:13:59.917 "name": null, 00:13:59.917 "uuid": "ef5c90d3-d624-4ae4-acc2-cd0ec4d0b829", 00:13:59.917 "is_configured": false, 00:13:59.917 "data_offset": 0, 00:13:59.917 "data_size": 63488 00:13:59.917 }, 00:13:59.917 { 00:13:59.917 "name": null, 00:13:59.917 "uuid": "50502567-695a-4e7c-9593-cb362fd99a5d", 00:13:59.917 "is_configured": false, 00:13:59.917 "data_offset": 0, 00:13:59.917 "data_size": 63488 00:13:59.917 }, 00:13:59.917 { 00:13:59.917 "name": "BaseBdev3", 00:13:59.917 "uuid": "d61c5548-db0b-4ced-83ac-800405eaaacd", 00:13:59.917 "is_configured": true, 00:13:59.917 "data_offset": 2048, 00:13:59.917 "data_size": 63488 00:13:59.917 } 00:13:59.917 ] 00:13:59.917 }' 00:13:59.917 12:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.917 12:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.175 12:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.175 12:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:00.176 12:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.176 12:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.176 12:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.434 12:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:00.434 12:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:00.434 12:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.434 12:12:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.434 [2024-11-25 12:12:56.284623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:00.434 12:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.434 12:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:00.434 12:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:00.434 12:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:00.434 12:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:00.434 12:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.434 12:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:00.434 12:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.434 12:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.434 12:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.434 12:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.434 12:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.434 12:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.434 12:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.434 12:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.434 
12:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.434 12:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.434 "name": "Existed_Raid", 00:14:00.434 "uuid": "33fdbba9-ba83-40a8-a4b2-2369472296fc", 00:14:00.434 "strip_size_kb": 64, 00:14:00.434 "state": "configuring", 00:14:00.434 "raid_level": "concat", 00:14:00.434 "superblock": true, 00:14:00.434 "num_base_bdevs": 3, 00:14:00.434 "num_base_bdevs_discovered": 2, 00:14:00.434 "num_base_bdevs_operational": 3, 00:14:00.434 "base_bdevs_list": [ 00:14:00.434 { 00:14:00.434 "name": null, 00:14:00.435 "uuid": "ef5c90d3-d624-4ae4-acc2-cd0ec4d0b829", 00:14:00.435 "is_configured": false, 00:14:00.435 "data_offset": 0, 00:14:00.435 "data_size": 63488 00:14:00.435 }, 00:14:00.435 { 00:14:00.435 "name": "BaseBdev2", 00:14:00.435 "uuid": "50502567-695a-4e7c-9593-cb362fd99a5d", 00:14:00.435 "is_configured": true, 00:14:00.435 "data_offset": 2048, 00:14:00.435 "data_size": 63488 00:14:00.435 }, 00:14:00.435 { 00:14:00.435 "name": "BaseBdev3", 00:14:00.435 "uuid": "d61c5548-db0b-4ced-83ac-800405eaaacd", 00:14:00.435 "is_configured": true, 00:14:00.435 "data_offset": 2048, 00:14:00.435 "data_size": 63488 00:14:00.435 } 00:14:00.435 ] 00:14:00.435 }' 00:14:00.435 12:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.435 12:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.002 12:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.002 12:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:01.002 12:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.002 12:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
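The trace above repeatedly invokes `verify_raid_bdev_state`, which fetches raid bdev info with `rpc_cmd bdev_raid_get_bdevs all`, filters it through jq, and compares the resulting fields against the expected values. A minimal Python sketch of those same field checks, using the `raid_bdev_info` JSON captured in this log (the real helper lives in `bdev/bdev_raid.sh`; the function name and field choices below follow the trace, not the full script):

```python
import json

# raid_bdev_info as captured in the trace above (trimmed to the
# fields the shell helper actually compares).
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "concat",
  "superblock": true,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 3
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size,
                           num_operational):
    # Mirrors the comparisons verify_raid_bdev_state makes against
    # the jq-extracted fields in the shell trace.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational

# Matches the call in the trace:
# verify_raid_bdev_state Existed_Raid configuring concat 64 3
verify_raid_bdev_state(raid_bdev_info, "configuring", "concat", 64, 3)
```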
00:14:01.002 12:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.002 12:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:01.002 12:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.002 12:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.002 12:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:01.002 12:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.002 12:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.002 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ef5c90d3-d624-4ae4-acc2-cd0ec4d0b829 00:14:01.002 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.002 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.002 [2024-11-25 12:12:57.042674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:01.002 [2024-11-25 12:12:57.043170] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:01.002 [2024-11-25 12:12:57.043204] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:01.002 NewBaseBdev 00:14:01.002 [2024-11-25 12:12:57.043542] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:01.002 [2024-11-25 12:12:57.043731] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:01.002 [2024-11-25 12:12:57.043748] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000008200 00:14:01.002 [2024-11-25 12:12:57.043915] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:01.002 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.002 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:01.002 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:01.002 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:01.002 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:01.002 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:01.002 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:01.002 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:01.002 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.002 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.002 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.002 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:01.002 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.002 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.002 [ 00:14:01.002 { 00:14:01.002 "name": "NewBaseBdev", 00:14:01.002 "aliases": [ 00:14:01.002 "ef5c90d3-d624-4ae4-acc2-cd0ec4d0b829" 00:14:01.002 ], 00:14:01.002 "product_name": "Malloc disk", 00:14:01.002 "block_size": 512, 
00:14:01.002 "num_blocks": 65536, 00:14:01.002 "uuid": "ef5c90d3-d624-4ae4-acc2-cd0ec4d0b829", 00:14:01.002 "assigned_rate_limits": { 00:14:01.002 "rw_ios_per_sec": 0, 00:14:01.002 "rw_mbytes_per_sec": 0, 00:14:01.002 "r_mbytes_per_sec": 0, 00:14:01.002 "w_mbytes_per_sec": 0 00:14:01.002 }, 00:14:01.002 "claimed": true, 00:14:01.002 "claim_type": "exclusive_write", 00:14:01.002 "zoned": false, 00:14:01.002 "supported_io_types": { 00:14:01.002 "read": true, 00:14:01.002 "write": true, 00:14:01.002 "unmap": true, 00:14:01.002 "flush": true, 00:14:01.003 "reset": true, 00:14:01.003 "nvme_admin": false, 00:14:01.003 "nvme_io": false, 00:14:01.003 "nvme_io_md": false, 00:14:01.003 "write_zeroes": true, 00:14:01.003 "zcopy": true, 00:14:01.003 "get_zone_info": false, 00:14:01.003 "zone_management": false, 00:14:01.003 "zone_append": false, 00:14:01.003 "compare": false, 00:14:01.003 "compare_and_write": false, 00:14:01.003 "abort": true, 00:14:01.003 "seek_hole": false, 00:14:01.003 "seek_data": false, 00:14:01.003 "copy": true, 00:14:01.003 "nvme_iov_md": false 00:14:01.003 }, 00:14:01.003 "memory_domains": [ 00:14:01.003 { 00:14:01.003 "dma_device_id": "system", 00:14:01.003 "dma_device_type": 1 00:14:01.003 }, 00:14:01.003 { 00:14:01.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.003 "dma_device_type": 2 00:14:01.003 } 00:14:01.003 ], 00:14:01.003 "driver_specific": {} 00:14:01.003 } 00:14:01.003 ] 00:14:01.003 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.003 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:01.003 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:14:01.003 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:01.003 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:14:01.003 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:01.003 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:01.003 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:01.003 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.003 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.003 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.003 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.003 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.003 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.003 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.003 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.261 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.261 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.261 "name": "Existed_Raid", 00:14:01.261 "uuid": "33fdbba9-ba83-40a8-a4b2-2369472296fc", 00:14:01.261 "strip_size_kb": 64, 00:14:01.261 "state": "online", 00:14:01.261 "raid_level": "concat", 00:14:01.261 "superblock": true, 00:14:01.261 "num_base_bdevs": 3, 00:14:01.261 "num_base_bdevs_discovered": 3, 00:14:01.261 "num_base_bdevs_operational": 3, 00:14:01.261 "base_bdevs_list": [ 00:14:01.261 { 00:14:01.261 "name": "NewBaseBdev", 00:14:01.261 "uuid": 
"ef5c90d3-d624-4ae4-acc2-cd0ec4d0b829", 00:14:01.261 "is_configured": true, 00:14:01.261 "data_offset": 2048, 00:14:01.261 "data_size": 63488 00:14:01.261 }, 00:14:01.261 { 00:14:01.261 "name": "BaseBdev2", 00:14:01.261 "uuid": "50502567-695a-4e7c-9593-cb362fd99a5d", 00:14:01.261 "is_configured": true, 00:14:01.261 "data_offset": 2048, 00:14:01.261 "data_size": 63488 00:14:01.261 }, 00:14:01.261 { 00:14:01.261 "name": "BaseBdev3", 00:14:01.261 "uuid": "d61c5548-db0b-4ced-83ac-800405eaaacd", 00:14:01.261 "is_configured": true, 00:14:01.261 "data_offset": 2048, 00:14:01.261 "data_size": 63488 00:14:01.261 } 00:14:01.261 ] 00:14:01.261 }' 00:14:01.261 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.261 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.519 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:01.519 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:01.519 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:01.519 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:01.519 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:01.519 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:01.519 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:01.519 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:01.519 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.519 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:14:01.519 [2024-11-25 12:12:57.599247] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:01.807 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.807 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:01.807 "name": "Existed_Raid", 00:14:01.807 "aliases": [ 00:14:01.807 "33fdbba9-ba83-40a8-a4b2-2369472296fc" 00:14:01.807 ], 00:14:01.807 "product_name": "Raid Volume", 00:14:01.808 "block_size": 512, 00:14:01.808 "num_blocks": 190464, 00:14:01.808 "uuid": "33fdbba9-ba83-40a8-a4b2-2369472296fc", 00:14:01.808 "assigned_rate_limits": { 00:14:01.808 "rw_ios_per_sec": 0, 00:14:01.808 "rw_mbytes_per_sec": 0, 00:14:01.808 "r_mbytes_per_sec": 0, 00:14:01.808 "w_mbytes_per_sec": 0 00:14:01.808 }, 00:14:01.808 "claimed": false, 00:14:01.808 "zoned": false, 00:14:01.808 "supported_io_types": { 00:14:01.808 "read": true, 00:14:01.808 "write": true, 00:14:01.808 "unmap": true, 00:14:01.808 "flush": true, 00:14:01.808 "reset": true, 00:14:01.808 "nvme_admin": false, 00:14:01.808 "nvme_io": false, 00:14:01.808 "nvme_io_md": false, 00:14:01.808 "write_zeroes": true, 00:14:01.808 "zcopy": false, 00:14:01.808 "get_zone_info": false, 00:14:01.808 "zone_management": false, 00:14:01.808 "zone_append": false, 00:14:01.808 "compare": false, 00:14:01.808 "compare_and_write": false, 00:14:01.808 "abort": false, 00:14:01.808 "seek_hole": false, 00:14:01.808 "seek_data": false, 00:14:01.808 "copy": false, 00:14:01.808 "nvme_iov_md": false 00:14:01.808 }, 00:14:01.808 "memory_domains": [ 00:14:01.808 { 00:14:01.808 "dma_device_id": "system", 00:14:01.808 "dma_device_type": 1 00:14:01.808 }, 00:14:01.808 { 00:14:01.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.808 "dma_device_type": 2 00:14:01.808 }, 00:14:01.808 { 00:14:01.808 "dma_device_id": "system", 00:14:01.808 "dma_device_type": 1 00:14:01.808 }, 00:14:01.808 { 00:14:01.808 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.808 "dma_device_type": 2 00:14:01.808 }, 00:14:01.808 { 00:14:01.808 "dma_device_id": "system", 00:14:01.808 "dma_device_type": 1 00:14:01.808 }, 00:14:01.808 { 00:14:01.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.808 "dma_device_type": 2 00:14:01.808 } 00:14:01.808 ], 00:14:01.808 "driver_specific": { 00:14:01.808 "raid": { 00:14:01.808 "uuid": "33fdbba9-ba83-40a8-a4b2-2369472296fc", 00:14:01.808 "strip_size_kb": 64, 00:14:01.808 "state": "online", 00:14:01.808 "raid_level": "concat", 00:14:01.808 "superblock": true, 00:14:01.808 "num_base_bdevs": 3, 00:14:01.808 "num_base_bdevs_discovered": 3, 00:14:01.808 "num_base_bdevs_operational": 3, 00:14:01.808 "base_bdevs_list": [ 00:14:01.808 { 00:14:01.808 "name": "NewBaseBdev", 00:14:01.808 "uuid": "ef5c90d3-d624-4ae4-acc2-cd0ec4d0b829", 00:14:01.808 "is_configured": true, 00:14:01.808 "data_offset": 2048, 00:14:01.808 "data_size": 63488 00:14:01.808 }, 00:14:01.808 { 00:14:01.808 "name": "BaseBdev2", 00:14:01.808 "uuid": "50502567-695a-4e7c-9593-cb362fd99a5d", 00:14:01.808 "is_configured": true, 00:14:01.808 "data_offset": 2048, 00:14:01.808 "data_size": 63488 00:14:01.808 }, 00:14:01.808 { 00:14:01.808 "name": "BaseBdev3", 00:14:01.808 "uuid": "d61c5548-db0b-4ced-83ac-800405eaaacd", 00:14:01.808 "is_configured": true, 00:14:01.808 "data_offset": 2048, 00:14:01.808 "data_size": 63488 00:14:01.808 } 00:14:01.808 ] 00:14:01.808 } 00:14:01.808 } 00:14:01.808 }' 00:14:01.808 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:01.808 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:01.808 BaseBdev2 00:14:01.808 BaseBdev3' 00:14:01.808 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
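The `verify_raid_bdev_properties` step above extracts configured base bdev names with the jq filter `.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name`. A Python equivalent over the `base_bdevs_list` shape seen in this log (list contents copied from the trace; only `name`/`is_configured` are kept):

```python
# base_bdevs_list as reported for Existed_Raid in the trace above.
base_bdevs_list = [
    {"name": "NewBaseBdev", "is_configured": True},
    {"name": "BaseBdev2", "is_configured": True},
    {"name": "BaseBdev3", "is_configured": True},
]

# Same selection the jq filter performs:
#   select(.is_configured == true).name
configured = [b["name"] for b in base_bdevs_list if b["is_configured"]]
# Matches base_bdev_names='NewBaseBdev BaseBdev2 BaseBdev3' in the trace.
```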
00:14:01.808 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:01.808 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:01.808 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:01.808 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.808 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.808 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.808 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.808 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:01.808 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:01.808 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:01.808 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:01.808 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.808 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.808 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.808 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.808 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:01.808 12:12:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:01.808 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:01.808 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:01.808 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.808 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.808 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.808 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.067 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:02.067 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:02.067 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:02.067 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.067 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.067 [2024-11-25 12:12:57.878945] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:02.067 [2024-11-25 12:12:57.878981] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:02.067 [2024-11-25 12:12:57.879086] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:02.067 [2024-11-25 12:12:57.879163] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:02.067 [2024-11-25 12:12:57.879184] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:14:02.067 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.067 12:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66279 00:14:02.067 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66279 ']' 00:14:02.067 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66279 00:14:02.067 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:02.067 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:02.067 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66279 00:14:02.067 killing process with pid 66279 00:14:02.067 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:02.067 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:02.067 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66279' 00:14:02.067 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66279 00:14:02.067 [2024-11-25 12:12:57.919008] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:02.067 12:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66279 00:14:02.325 [2024-11-25 12:12:58.193314] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:03.265 ************************************ 00:14:03.265 END TEST raid_state_function_test_sb 00:14:03.265 ************************************ 00:14:03.265 12:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:03.265 00:14:03.265 real 0m11.746s 
00:14:03.265 user 0m19.537s 00:14:03.265 sys 0m1.565s 00:14:03.265 12:12:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:03.265 12:12:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.265 12:12:59 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:14:03.265 12:12:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:03.265 12:12:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:03.265 12:12:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:03.265 ************************************ 00:14:03.265 START TEST raid_superblock_test 00:14:03.265 ************************************ 00:14:03.265 12:12:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:14:03.266 12:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:14:03.266 12:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:14:03.266 12:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:03.266 12:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:03.266 12:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:03.266 12:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:03.266 12:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:03.266 12:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:03.266 12:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:03.266 12:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:03.266 12:12:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:03.266 12:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:03.266 12:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:03.266 12:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:14:03.266 12:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:03.266 12:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:03.266 12:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66916 00:14:03.266 12:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66916 00:14:03.266 12:12:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66916 ']' 00:14:03.266 12:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:03.266 12:12:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.266 12:12:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:03.266 12:12:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.266 12:12:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:03.266 12:12:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.533 [2024-11-25 12:12:59.360276] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:14:03.533 [2024-11-25 12:12:59.360473] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66916 ] 00:14:03.533 [2024-11-25 12:12:59.537426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.792 [2024-11-25 12:12:59.674836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.051 [2024-11-25 12:12:59.895709] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:04.051 [2024-11-25 12:12:59.895892] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:04.310 12:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:04.310 12:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:04.310 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:04.310 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:04.310 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:04.310 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:04.310 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:04.310 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:04.310 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:04.310 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:04.310 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:04.310 
12:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.310 12:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.569 malloc1 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.569 [2024-11-25 12:13:00.401648] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:04.569 [2024-11-25 12:13:00.401728] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.569 [2024-11-25 12:13:00.401760] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:04.569 [2024-11-25 12:13:00.401776] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.569 [2024-11-25 12:13:00.404586] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.569 [2024-11-25 12:13:00.404632] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:04.569 pt1 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.569 malloc2 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.569 [2024-11-25 12:13:00.449366] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:04.569 [2024-11-25 12:13:00.449435] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.569 [2024-11-25 12:13:00.449465] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:04.569 [2024-11-25 12:13:00.449480] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.569 [2024-11-25 12:13:00.452211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.569 [2024-11-25 12:13:00.452409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:04.569 
pt2 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.569 malloc3 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.569 [2024-11-25 12:13:00.515261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:04.569 [2024-11-25 12:13:00.515333] 
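The trace above shows the setup loop in bdev_raid.sh (lines @416-426) iterating `i` up to `num_base_bdevs`, appending to three parallel arrays (`base_bdevs_malloc`, `base_bdevs_pt`, `base_bdevs_pt_uuid`) and creating a malloc/passthru pair per iteration. A minimal Python sketch of that bookkeeping — the names and UUID pattern are taken from the log, while the exact loop body is an assumption reconstructed from the visible xtrace:

```python
# Mirrors the bookkeeping done by the shell loop at bdev_raid.sh@416-426.
# The RPC calls themselves (rpc_cmd bdev_malloc_create / bdev_passthru_create)
# are only noted in comments; this just rebuilds the three parallel arrays.

num_base_bdevs = 3

base_bdevs_malloc = []
base_bdevs_pt = []
base_bdevs_pt_uuid = []

for i in range(1, num_base_bdevs + 1):
    bdev_malloc = f"malloc{i}"
    bdev_pt = f"pt{i}"
    # UUIDs in the log follow the pattern 00000000-0000-0000-0000-00000000000<i>
    bdev_pt_uuid = f"00000000-0000-0000-0000-{i:012d}"
    base_bdevs_malloc.append(bdev_malloc)
    base_bdevs_pt.append(bdev_pt)
    base_bdevs_pt_uuid.append(bdev_pt_uuid)
    # corresponds to: rpc_cmd bdev_malloc_create 32 512 -b $bdev_malloc
    # followed by:    rpc_cmd bdev_passthru_create -b $bdev_malloc -p $bdev_pt -u $bdev_pt_uuid

print(base_bdevs_pt)  # ['pt1', 'pt2', 'pt3']
```

The passthru names are what `bdev_raid_create` is later invoked with (`-b 'pt1 pt2 pt3'`), while the malloc names are reused for the expected-failure case near the end of the trace.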
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.569 [2024-11-25 12:13:00.515386] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:04.569 [2024-11-25 12:13:00.515402] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.569 [2024-11-25 12:13:00.518161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.569 [2024-11-25 12:13:00.518357] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:04.569 pt3 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.569 [2024-11-25 12:13:00.523369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:04.569 [2024-11-25 12:13:00.526023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:04.569 [2024-11-25 12:13:00.526243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:04.569 [2024-11-25 12:13:00.526638] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:04.569 [2024-11-25 12:13:00.526771] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:04.569 [2024-11-25 12:13:00.527176] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:14:04.569 [2024-11-25 12:13:00.527431] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:04.569 [2024-11-25 12:13:00.527450] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:04.569 [2024-11-25 12:13:00.527719] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.569 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.570 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.570 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.570 12:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.570 12:13:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.570 12:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.570 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.570 "name": "raid_bdev1", 00:14:04.570 "uuid": "aae7b473-5eff-41d8-937b-00aed0d831f2", 00:14:04.570 "strip_size_kb": 64, 00:14:04.570 "state": "online", 00:14:04.570 "raid_level": "concat", 00:14:04.570 "superblock": true, 00:14:04.570 "num_base_bdevs": 3, 00:14:04.570 "num_base_bdevs_discovered": 3, 00:14:04.570 "num_base_bdevs_operational": 3, 00:14:04.570 "base_bdevs_list": [ 00:14:04.570 { 00:14:04.570 "name": "pt1", 00:14:04.570 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:04.570 "is_configured": true, 00:14:04.570 "data_offset": 2048, 00:14:04.570 "data_size": 63488 00:14:04.570 }, 00:14:04.570 { 00:14:04.570 "name": "pt2", 00:14:04.570 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:04.570 "is_configured": true, 00:14:04.570 "data_offset": 2048, 00:14:04.570 "data_size": 63488 00:14:04.570 }, 00:14:04.570 { 00:14:04.570 "name": "pt3", 00:14:04.570 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:04.570 "is_configured": true, 00:14:04.570 "data_offset": 2048, 00:14:04.570 "data_size": 63488 00:14:04.570 } 00:14:04.570 ] 00:14:04.570 }' 00:14:04.570 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.570 12:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.137 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:05.137 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:05.137 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:05.137 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
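The `verify_raid_bdev_state raid_bdev1 online concat 64 3` call above selects the raid bdev from `rpc_cmd bdev_raid_get_bdevs all` and compares fields against the expected values. A Python sketch of that check, using the JSON exactly as it appears in the log (the shell helper may compare more fields than shown here):

```python
import json

# raid_bdev_info as captured in the log from:
#   rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "uuid": "aae7b473-5eff-41d8-937b-00aed0d831f2",
  "strip_size_kb": 64,
  "state": "online",
  "raid_level": "concat",
  "superblock": true,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": "pt1", "uuid": "00000000-0000-0000-0000-000000000001",
     "is_configured": true, "data_offset": 2048, "data_size": 63488},
    {"name": "pt2", "uuid": "00000000-0000-0000-0000-000000000002",
     "is_configured": true, "data_offset": 2048, "data_size": 63488},
    {"name": "pt3", "uuid": "00000000-0000-0000-0000-000000000003",
     "is_configured": true, "data_offset": 2048, "data_size": 63488}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, operational):
    """Rough analogue of verify_raid_bdev_state (bdev_raid.sh@103-115)."""
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == operational)

ok = verify_raid_bdev_state(raid_bdev_info, "online", "concat", 64, 3)
print(ok)  # True
```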
base_bdev_names 00:14:05.137 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:05.137 12:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:05.137 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:05.137 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:05.137 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.137 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.137 [2024-11-25 12:13:01.008314] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:05.137 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.137 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:05.137 "name": "raid_bdev1", 00:14:05.137 "aliases": [ 00:14:05.137 "aae7b473-5eff-41d8-937b-00aed0d831f2" 00:14:05.137 ], 00:14:05.137 "product_name": "Raid Volume", 00:14:05.137 "block_size": 512, 00:14:05.137 "num_blocks": 190464, 00:14:05.137 "uuid": "aae7b473-5eff-41d8-937b-00aed0d831f2", 00:14:05.137 "assigned_rate_limits": { 00:14:05.137 "rw_ios_per_sec": 0, 00:14:05.137 "rw_mbytes_per_sec": 0, 00:14:05.137 "r_mbytes_per_sec": 0, 00:14:05.137 "w_mbytes_per_sec": 0 00:14:05.137 }, 00:14:05.137 "claimed": false, 00:14:05.137 "zoned": false, 00:14:05.137 "supported_io_types": { 00:14:05.137 "read": true, 00:14:05.137 "write": true, 00:14:05.137 "unmap": true, 00:14:05.137 "flush": true, 00:14:05.137 "reset": true, 00:14:05.137 "nvme_admin": false, 00:14:05.137 "nvme_io": false, 00:14:05.137 "nvme_io_md": false, 00:14:05.137 "write_zeroes": true, 00:14:05.137 "zcopy": false, 00:14:05.137 "get_zone_info": false, 00:14:05.137 "zone_management": false, 00:14:05.137 "zone_append": false, 00:14:05.137 "compare": 
false, 00:14:05.137 "compare_and_write": false, 00:14:05.137 "abort": false, 00:14:05.137 "seek_hole": false, 00:14:05.137 "seek_data": false, 00:14:05.137 "copy": false, 00:14:05.137 "nvme_iov_md": false 00:14:05.137 }, 00:14:05.137 "memory_domains": [ 00:14:05.137 { 00:14:05.137 "dma_device_id": "system", 00:14:05.137 "dma_device_type": 1 00:14:05.137 }, 00:14:05.137 { 00:14:05.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.137 "dma_device_type": 2 00:14:05.137 }, 00:14:05.137 { 00:14:05.137 "dma_device_id": "system", 00:14:05.137 "dma_device_type": 1 00:14:05.137 }, 00:14:05.137 { 00:14:05.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.137 "dma_device_type": 2 00:14:05.137 }, 00:14:05.137 { 00:14:05.137 "dma_device_id": "system", 00:14:05.137 "dma_device_type": 1 00:14:05.137 }, 00:14:05.137 { 00:14:05.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.137 "dma_device_type": 2 00:14:05.137 } 00:14:05.137 ], 00:14:05.137 "driver_specific": { 00:14:05.137 "raid": { 00:14:05.137 "uuid": "aae7b473-5eff-41d8-937b-00aed0d831f2", 00:14:05.137 "strip_size_kb": 64, 00:14:05.137 "state": "online", 00:14:05.137 "raid_level": "concat", 00:14:05.137 "superblock": true, 00:14:05.137 "num_base_bdevs": 3, 00:14:05.137 "num_base_bdevs_discovered": 3, 00:14:05.137 "num_base_bdevs_operational": 3, 00:14:05.137 "base_bdevs_list": [ 00:14:05.137 { 00:14:05.137 "name": "pt1", 00:14:05.137 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:05.137 "is_configured": true, 00:14:05.137 "data_offset": 2048, 00:14:05.137 "data_size": 63488 00:14:05.137 }, 00:14:05.137 { 00:14:05.137 "name": "pt2", 00:14:05.137 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:05.137 "is_configured": true, 00:14:05.137 "data_offset": 2048, 00:14:05.137 "data_size": 63488 00:14:05.137 }, 00:14:05.137 { 00:14:05.137 "name": "pt3", 00:14:05.137 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:05.137 "is_configured": true, 00:14:05.137 "data_offset": 2048, 00:14:05.137 
"data_size": 63488 00:14:05.137 } 00:14:05.137 ] 00:14:05.137 } 00:14:05.137 } 00:14:05.137 }' 00:14:05.137 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:05.137 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:05.137 pt2 00:14:05.137 pt3' 00:14:05.137 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:05.137 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:05.137 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:05.137 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:05.137 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:05.137 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.137 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.137 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.137 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:05.137 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:05.137 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:05.137 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:05.137 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:05.137 12:13:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.137 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.137 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:05.397 [2024-11-25 12:13:01.324216] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:05.397 12:13:01 
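The repeated `[[ 512 == \5\1\2\ \ \ ]]` comparisons above match a per-bdev fingerprint built by `jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'` against the raid bdev's fingerprint. When `md_size`, `md_interleave`, and `dif_type` are absent (null), jq's `join` renders them as empty strings, so a plain 512-byte bdev yields `"512"` followed by three spaces — which is why the escaped pattern carries three trailing `\ `. A hedged Python reproduction (the exact field set comes from the jq expression in the trace; whether xtrace collapsed any whitespace in this log is an assumption):

```python
import json

# bdev_get_bdevs output for one passthru bdev, reduced to the fingerprint
# fields; md_size/md_interleave/dif_type are null for a plain 512-byte bdev.
pt1 = json.loads('{"name": "pt1", "block_size": 512, '
                 '"md_size": null, "md_interleave": null, "dif_type": null}')

def fingerprint(b):
    # jq: '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
    # jq's join() renders null as an empty string.
    fields = [b.get("block_size"), b.get("md_size"),
              b.get("md_interleave"), b.get("dif_type")]
    return " ".join("" if f is None else str(f) for f in fields)

cmp_base_bdev = fingerprint(pt1)
print(repr(cmp_base_bdev))  # '512   '  -- three trailing spaces, as in \5\1\2\ \ \ 
```

The test asserts the raid bdev and every configured base bdev share the same fingerprint, i.e. identical block size and metadata layout.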
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=aae7b473-5eff-41d8-937b-00aed0d831f2 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z aae7b473-5eff-41d8-937b-00aed0d831f2 ']' 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.397 [2024-11-25 12:13:01.367878] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:05.397 [2024-11-25 12:13:01.367916] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:05.397 [2024-11-25 12:13:01.368025] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:05.397 [2024-11-25 12:13:01.368111] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:05.397 [2024-11-25 12:13:01.368128] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.397 12:13:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.397 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.656 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.656 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:05.656 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:05.656 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:05.656 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:05.656 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:05.656 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:05.656 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:05.656 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:05.656 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:05.656 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.656 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.656 [2024-11-25 12:13:01.520006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:05.656 [2024-11-25 12:13:01.522610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc2 is claimed 00:14:05.656 [2024-11-25 12:13:01.522805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:05.656 [2024-11-25 12:13:01.522926] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:05.656 [2024-11-25 12:13:01.523257] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:05.656 [2024-11-25 12:13:01.523479] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:05.656 [2024-11-25 12:13:01.523746] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:05.656 [2024-11-25 12:13:01.523799] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:05.656 request: 00:14:05.656 { 00:14:05.656 "name": "raid_bdev1", 00:14:05.656 "raid_level": "concat", 00:14:05.656 "base_bdevs": [ 00:14:05.656 "malloc1", 00:14:05.656 "malloc2", 00:14:05.656 "malloc3" 00:14:05.656 ], 00:14:05.656 "strip_size_kb": 64, 00:14:05.656 "superblock": false, 00:14:05.656 "method": "bdev_raid_create", 00:14:05.656 "req_id": 1 00:14:05.656 } 00:14:05.656 Got JSON-RPC error response 00:14:05.656 response: 00:14:05.656 { 00:14:05.656 "code": -17, 00:14:05.656 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:05.656 } 00:14:05.657 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:05.657 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:05.657 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:05.657 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:05.657 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # 
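The `NOT rpc_cmd bdev_raid_create ... -b 'malloc1 malloc2 malloc3'` step above is an expected failure: the malloc bdevs still carry the superblock written for `raid_bdev1`, so SPDK rejects the second create with the JSON-RPC error shown in the log. The shell helpers (`NOT`/`valid_exec_arg`) only check for a non-zero exit; a sketch of how a Python caller might validate the same error payload (the `-17`/`-EEXIST` correspondence matches the Linux errno value, and the helper name here is hypothetical):

```python
import json

# The JSON-RPC error returned by the failed bdev_raid_create, as logged:
response = json.loads("""
{
  "code": -17,
  "message": "Failed to create RAID bdev raid_bdev1: File exists"
}
""")

def expect_file_exists(err):
    # -17 is -EEXIST (errno 17 on Linux), which SPDK reports when the base
    # bdevs already hold a superblock belonging to another raid bdev.
    return err["code"] == -17 and "File exists" in err["message"]

print(expect_file_exists(response))  # True
```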
(( !es == 0 )) 00:14:05.657 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.657 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:05.657 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.657 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.657 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.657 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:05.657 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:05.657 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:05.657 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.657 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.657 [2024-11-25 12:13:01.588319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:05.657 [2024-11-25 12:13:01.588605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.657 [2024-11-25 12:13:01.588799] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:05.657 [2024-11-25 12:13:01.588971] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.657 [2024-11-25 12:13:01.593215] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.657 [2024-11-25 12:13:01.593491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:05.657 [2024-11-25 12:13:01.593849] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:05.657 [2024-11-25 12:13:01.594131] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:05.657 pt1 00:14:05.657 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.657 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:14:05.657 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.657 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:05.657 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:05.657 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:05.657 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:05.657 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.657 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.657 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.657 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.657 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.657 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.657 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.657 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.657 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.657 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.657 "name": "raid_bdev1", 
00:14:05.657 "uuid": "aae7b473-5eff-41d8-937b-00aed0d831f2", 00:14:05.657 "strip_size_kb": 64, 00:14:05.657 "state": "configuring", 00:14:05.657 "raid_level": "concat", 00:14:05.657 "superblock": true, 00:14:05.657 "num_base_bdevs": 3, 00:14:05.657 "num_base_bdevs_discovered": 1, 00:14:05.657 "num_base_bdevs_operational": 3, 00:14:05.657 "base_bdevs_list": [ 00:14:05.657 { 00:14:05.657 "name": "pt1", 00:14:05.657 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:05.657 "is_configured": true, 00:14:05.657 "data_offset": 2048, 00:14:05.657 "data_size": 63488 00:14:05.657 }, 00:14:05.657 { 00:14:05.657 "name": null, 00:14:05.657 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:05.657 "is_configured": false, 00:14:05.657 "data_offset": 2048, 00:14:05.657 "data_size": 63488 00:14:05.657 }, 00:14:05.657 { 00:14:05.657 "name": null, 00:14:05.657 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:05.657 "is_configured": false, 00:14:05.657 "data_offset": 2048, 00:14:05.657 "data_size": 63488 00:14:05.657 } 00:14:05.657 ] 00:14:05.657 }' 00:14:05.657 12:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.657 12:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.224 12:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:14:06.224 12:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:06.224 12:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.224 12:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.224 [2024-11-25 12:13:02.094120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:06.224 [2024-11-25 12:13:02.094202] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.224 [2024-11-25 12:13:02.094239] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:06.224 [2024-11-25 12:13:02.094255] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.224 [2024-11-25 12:13:02.094840] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.224 [2024-11-25 12:13:02.094884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:06.224 [2024-11-25 12:13:02.094997] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:06.224 [2024-11-25 12:13:02.095030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:06.224 pt2 00:14:06.224 12:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.224 12:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:06.224 12:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.224 12:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.224 [2024-11-25 12:13:02.102128] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:06.224 12:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.224 12:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:14:06.224 12:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.224 12:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:06.224 12:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:06.224 12:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.224 12:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:14:06.224 12:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.224 12:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.224 12:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.224 12:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.224 12:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.224 12:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.224 12:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.224 12:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.224 12:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.224 12:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.224 "name": "raid_bdev1", 00:14:06.224 "uuid": "aae7b473-5eff-41d8-937b-00aed0d831f2", 00:14:06.224 "strip_size_kb": 64, 00:14:06.224 "state": "configuring", 00:14:06.224 "raid_level": "concat", 00:14:06.224 "superblock": true, 00:14:06.224 "num_base_bdevs": 3, 00:14:06.224 "num_base_bdevs_discovered": 1, 00:14:06.224 "num_base_bdevs_operational": 3, 00:14:06.224 "base_bdevs_list": [ 00:14:06.224 { 00:14:06.224 "name": "pt1", 00:14:06.224 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:06.224 "is_configured": true, 00:14:06.224 "data_offset": 2048, 00:14:06.224 "data_size": 63488 00:14:06.224 }, 00:14:06.224 { 00:14:06.224 "name": null, 00:14:06.224 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:06.224 "is_configured": false, 00:14:06.224 "data_offset": 0, 00:14:06.224 "data_size": 63488 00:14:06.224 }, 00:14:06.224 { 00:14:06.224 "name": null, 00:14:06.224 
"uuid": "00000000-0000-0000-0000-000000000003", 00:14:06.224 "is_configured": false, 00:14:06.224 "data_offset": 2048, 00:14:06.224 "data_size": 63488 00:14:06.224 } 00:14:06.224 ] 00:14:06.224 }' 00:14:06.224 12:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.224 12:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.483 12:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:06.483 12:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:06.483 12:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:06.483 12:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.483 12:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.483 [2024-11-25 12:13:02.558215] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:06.483 [2024-11-25 12:13:02.558308] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.483 [2024-11-25 12:13:02.558349] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:06.483 [2024-11-25 12:13:02.558369] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.483 [2024-11-25 12:13:02.558953] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.483 [2024-11-25 12:13:02.558985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:06.483 [2024-11-25 12:13:02.559092] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:06.483 [2024-11-25 12:13:02.559130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:06.483 pt2 00:14:06.483 12:13:02 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.483 12:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:06.483 12:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:06.483 12:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:06.483 12:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.483 12:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.483 [2024-11-25 12:13:02.566243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:06.483 [2024-11-25 12:13:02.566326] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.483 [2024-11-25 12:13:02.566367] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:06.483 [2024-11-25 12:13:02.566386] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.483 [2024-11-25 12:13:02.566940] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.483 [2024-11-25 12:13:02.566992] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:06.483 [2024-11-25 12:13:02.567097] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:06.483 [2024-11-25 12:13:02.567133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:06.483 [2024-11-25 12:13:02.567292] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:06.483 [2024-11-25 12:13:02.567313] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:06.483 [2024-11-25 12:13:02.567656] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:14:06.483 [2024-11-25 12:13:02.567854] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:06.483 [2024-11-25 12:13:02.567870] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:06.483 [2024-11-25 12:13:02.568044] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.483 pt3 00:14:06.483 12:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.483 12:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:06.483 12:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:06.483 12:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:14:06.742 12:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.742 12:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.742 12:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:06.742 12:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.742 12:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:06.742 12:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.742 12:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.742 12:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.742 12:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.742 12:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.742 12:13:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.742 12:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.742 12:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.742 12:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.742 12:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.742 "name": "raid_bdev1", 00:14:06.742 "uuid": "aae7b473-5eff-41d8-937b-00aed0d831f2", 00:14:06.742 "strip_size_kb": 64, 00:14:06.742 "state": "online", 00:14:06.742 "raid_level": "concat", 00:14:06.742 "superblock": true, 00:14:06.742 "num_base_bdevs": 3, 00:14:06.742 "num_base_bdevs_discovered": 3, 00:14:06.742 "num_base_bdevs_operational": 3, 00:14:06.742 "base_bdevs_list": [ 00:14:06.742 { 00:14:06.742 "name": "pt1", 00:14:06.742 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:06.742 "is_configured": true, 00:14:06.742 "data_offset": 2048, 00:14:06.742 "data_size": 63488 00:14:06.743 }, 00:14:06.743 { 00:14:06.743 "name": "pt2", 00:14:06.743 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:06.743 "is_configured": true, 00:14:06.743 "data_offset": 2048, 00:14:06.743 "data_size": 63488 00:14:06.743 }, 00:14:06.743 { 00:14:06.743 "name": "pt3", 00:14:06.743 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:06.743 "is_configured": true, 00:14:06.743 "data_offset": 2048, 00:14:06.743 "data_size": 63488 00:14:06.743 } 00:14:06.743 ] 00:14:06.743 }' 00:14:06.743 12:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.743 12:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.310 12:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:07.310 12:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:14:07.310 12:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:07.310 12:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:07.310 12:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:07.310 12:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:07.310 12:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:07.310 12:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:07.310 12:13:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.310 12:13:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.310 [2024-11-25 12:13:03.143009] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:07.310 12:13:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.310 12:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:07.310 "name": "raid_bdev1", 00:14:07.310 "aliases": [ 00:14:07.310 "aae7b473-5eff-41d8-937b-00aed0d831f2" 00:14:07.310 ], 00:14:07.310 "product_name": "Raid Volume", 00:14:07.310 "block_size": 512, 00:14:07.310 "num_blocks": 190464, 00:14:07.310 "uuid": "aae7b473-5eff-41d8-937b-00aed0d831f2", 00:14:07.310 "assigned_rate_limits": { 00:14:07.310 "rw_ios_per_sec": 0, 00:14:07.310 "rw_mbytes_per_sec": 0, 00:14:07.310 "r_mbytes_per_sec": 0, 00:14:07.310 "w_mbytes_per_sec": 0 00:14:07.310 }, 00:14:07.310 "claimed": false, 00:14:07.310 "zoned": false, 00:14:07.310 "supported_io_types": { 00:14:07.310 "read": true, 00:14:07.310 "write": true, 00:14:07.310 "unmap": true, 00:14:07.310 "flush": true, 00:14:07.310 "reset": true, 00:14:07.310 "nvme_admin": false, 00:14:07.310 "nvme_io": false, 
00:14:07.310 "nvme_io_md": false, 00:14:07.310 "write_zeroes": true, 00:14:07.310 "zcopy": false, 00:14:07.310 "get_zone_info": false, 00:14:07.310 "zone_management": false, 00:14:07.310 "zone_append": false, 00:14:07.310 "compare": false, 00:14:07.310 "compare_and_write": false, 00:14:07.310 "abort": false, 00:14:07.310 "seek_hole": false, 00:14:07.310 "seek_data": false, 00:14:07.310 "copy": false, 00:14:07.310 "nvme_iov_md": false 00:14:07.310 }, 00:14:07.310 "memory_domains": [ 00:14:07.310 { 00:14:07.310 "dma_device_id": "system", 00:14:07.310 "dma_device_type": 1 00:14:07.310 }, 00:14:07.310 { 00:14:07.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.310 "dma_device_type": 2 00:14:07.310 }, 00:14:07.310 { 00:14:07.310 "dma_device_id": "system", 00:14:07.310 "dma_device_type": 1 00:14:07.310 }, 00:14:07.310 { 00:14:07.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.310 "dma_device_type": 2 00:14:07.310 }, 00:14:07.310 { 00:14:07.310 "dma_device_id": "system", 00:14:07.310 "dma_device_type": 1 00:14:07.310 }, 00:14:07.310 { 00:14:07.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.310 "dma_device_type": 2 00:14:07.310 } 00:14:07.310 ], 00:14:07.310 "driver_specific": { 00:14:07.310 "raid": { 00:14:07.310 "uuid": "aae7b473-5eff-41d8-937b-00aed0d831f2", 00:14:07.310 "strip_size_kb": 64, 00:14:07.310 "state": "online", 00:14:07.310 "raid_level": "concat", 00:14:07.310 "superblock": true, 00:14:07.310 "num_base_bdevs": 3, 00:14:07.310 "num_base_bdevs_discovered": 3, 00:14:07.310 "num_base_bdevs_operational": 3, 00:14:07.310 "base_bdevs_list": [ 00:14:07.310 { 00:14:07.310 "name": "pt1", 00:14:07.310 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:07.310 "is_configured": true, 00:14:07.310 "data_offset": 2048, 00:14:07.310 "data_size": 63488 00:14:07.310 }, 00:14:07.310 { 00:14:07.310 "name": "pt2", 00:14:07.310 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:07.310 "is_configured": true, 00:14:07.310 "data_offset": 2048, 00:14:07.310 
"data_size": 63488 00:14:07.310 }, 00:14:07.310 { 00:14:07.310 "name": "pt3", 00:14:07.310 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:07.310 "is_configured": true, 00:14:07.310 "data_offset": 2048, 00:14:07.310 "data_size": 63488 00:14:07.311 } 00:14:07.311 ] 00:14:07.311 } 00:14:07.311 } 00:14:07.311 }' 00:14:07.311 12:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:07.311 12:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:07.311 pt2 00:14:07.311 pt3' 00:14:07.311 12:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:07.311 12:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:07.311 12:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:07.311 12:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:07.311 12:13:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.311 12:13:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.311 12:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:07.311 12:13:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.311 12:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:07.311 12:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:07.311 12:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:07.311 12:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:14:07.311 12:13:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.311 12:13:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.311 12:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:07.311 12:13:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.569 12:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:07.569 12:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:07.569 12:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:07.569 12:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:07.569 12:13:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.569 12:13:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.569 12:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:07.569 12:13:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.569 12:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:07.569 12:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:07.569 12:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:07.569 12:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:07.569 12:13:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.569 12:13:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:07.569 [2024-11-25 12:13:03.478894] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:07.569 12:13:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.569 12:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' aae7b473-5eff-41d8-937b-00aed0d831f2 '!=' aae7b473-5eff-41d8-937b-00aed0d831f2 ']' 00:14:07.569 12:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:14:07.569 12:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:07.569 12:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:07.569 12:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66916 00:14:07.569 12:13:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66916 ']' 00:14:07.569 12:13:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66916 00:14:07.570 12:13:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:14:07.570 12:13:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:07.570 12:13:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66916 00:14:07.570 12:13:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:07.570 12:13:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:07.570 12:13:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66916' 00:14:07.570 killing process with pid 66916 00:14:07.570 12:13:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66916 00:14:07.570 [2024-11-25 12:13:03.549687] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:14:07.570 12:13:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66916 00:14:07.570 [2024-11-25 12:13:03.550001] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:07.570 [2024-11-25 12:13:03.550220] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:07.570 [2024-11-25 12:13:03.550359] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:07.828 [2024-11-25 12:13:03.839723] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:09.207 12:13:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:09.207 00:14:09.207 real 0m5.598s 00:14:09.207 user 0m8.411s 00:14:09.207 sys 0m0.782s 00:14:09.207 ************************************ 00:14:09.207 END TEST raid_superblock_test 00:14:09.207 ************************************ 00:14:09.207 12:13:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:09.207 12:13:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.207 12:13:04 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:14:09.207 12:13:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:09.207 12:13:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:09.207 12:13:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:09.207 ************************************ 00:14:09.207 START TEST raid_read_error_test 00:14:09.207 ************************************ 00:14:09.207 12:13:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:14:09.207 12:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:14:09.207 12:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:14:09.207 12:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:14:09.207 12:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:09.207 12:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:09.207 12:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:09.207 12:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:09.207 12:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:09.207 12:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:09.207 12:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:09.207 12:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:09.207 12:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:09.207 12:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:09.207 12:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:09.207 12:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:09.207 12:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:09.207 12:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:09.207 12:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:09.207 12:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:09.207 12:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:09.207 12:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:09.207 12:13:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:14:09.207 12:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:09.207 12:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:09.207 12:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:09.207 12:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Th2fy0ao6A 00:14:09.207 12:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67169 00:14:09.207 12:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67169 00:14:09.207 12:13:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67169 ']' 00:14:09.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.207 12:13:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.207 12:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:09.207 12:13:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:09.207 12:13:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.207 12:13:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:09.207 12:13:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.208 [2024-11-25 12:13:05.035122] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:14:09.208 [2024-11-25 12:13:05.035498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67169 ] 00:14:09.208 [2024-11-25 12:13:05.220950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.467 [2024-11-25 12:13:05.351555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.731 [2024-11-25 12:13:05.557714] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:09.731 [2024-11-25 12:13:05.557996] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:10.001 12:13:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:10.001 12:13:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:10.001 12:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:10.001 12:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:10.001 12:13:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.001 12:13:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.001 BaseBdev1_malloc 00:14:10.001 12:13:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.001 12:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:10.001 12:13:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.001 12:13:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.001 true 00:14:10.001 12:13:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:10.001 12:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:10.001 12:13:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.001 12:13:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.001 [2024-11-25 12:13:06.084165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:10.001 [2024-11-25 12:13:06.084399] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.001 [2024-11-25 12:13:06.084445] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:10.001 [2024-11-25 12:13:06.084464] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.001 [2024-11-25 12:13:06.087266] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.001 [2024-11-25 12:13:06.087319] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:10.001 BaseBdev1 00:14:10.001 12:13:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.001 12:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:10.001 12:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:10.001 12:13:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.001 12:13:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.261 BaseBdev2_malloc 00:14:10.261 12:13:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.261 12:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:10.261 12:13:06 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.261 12:13:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.261 true 00:14:10.261 12:13:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.261 12:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:10.261 12:13:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.261 12:13:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.261 [2024-11-25 12:13:06.140260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:10.261 [2024-11-25 12:13:06.140496] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.261 [2024-11-25 12:13:06.140538] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:10.261 [2024-11-25 12:13:06.140557] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.261 [2024-11-25 12:13:06.143372] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.261 [2024-11-25 12:13:06.143420] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:10.261 BaseBdev2 00:14:10.261 12:13:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.261 12:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:10.261 12:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:10.261 12:13:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.261 12:13:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.261 BaseBdev3_malloc 00:14:10.261 12:13:06 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.261 12:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:10.261 12:13:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.261 12:13:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.261 true 00:14:10.261 12:13:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.261 12:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:10.261 12:13:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.261 12:13:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.261 [2024-11-25 12:13:06.207377] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:10.261 [2024-11-25 12:13:06.207446] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.261 [2024-11-25 12:13:06.207479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:10.261 [2024-11-25 12:13:06.207497] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.261 [2024-11-25 12:13:06.210319] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.261 [2024-11-25 12:13:06.210383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:10.261 BaseBdev3 00:14:10.261 12:13:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.261 12:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:14:10.261 12:13:06 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.261 12:13:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.261 [2024-11-25 12:13:06.215478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:10.261 [2024-11-25 12:13:06.217911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:10.261 [2024-11-25 12:13:06.218039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:10.261 [2024-11-25 12:13:06.218327] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:10.261 [2024-11-25 12:13:06.218367] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:10.261 [2024-11-25 12:13:06.218711] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:14:10.261 [2024-11-25 12:13:06.218934] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:10.261 [2024-11-25 12:13:06.218957] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:10.261 [2024-11-25 12:13:06.219158] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:10.261 12:13:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.261 12:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:14:10.261 12:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.261 12:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.261 12:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:10.261 12:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.261 12:13:06 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:10.261 12:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.261 12:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.261 12:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.261 12:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.261 12:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.261 12:13:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.261 12:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.261 12:13:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.261 12:13:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.261 12:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.261 "name": "raid_bdev1", 00:14:10.261 "uuid": "3558fef9-58b9-4e88-95ea-5932532c782f", 00:14:10.261 "strip_size_kb": 64, 00:14:10.261 "state": "online", 00:14:10.261 "raid_level": "concat", 00:14:10.261 "superblock": true, 00:14:10.261 "num_base_bdevs": 3, 00:14:10.261 "num_base_bdevs_discovered": 3, 00:14:10.261 "num_base_bdevs_operational": 3, 00:14:10.261 "base_bdevs_list": [ 00:14:10.261 { 00:14:10.261 "name": "BaseBdev1", 00:14:10.261 "uuid": "4e14098d-e5e5-5b53-b2d1-a1ccb4d081e6", 00:14:10.261 "is_configured": true, 00:14:10.261 "data_offset": 2048, 00:14:10.261 "data_size": 63488 00:14:10.261 }, 00:14:10.261 { 00:14:10.261 "name": "BaseBdev2", 00:14:10.261 "uuid": "2e4929af-118a-5666-8160-25cb2e7b6167", 00:14:10.261 "is_configured": true, 00:14:10.261 "data_offset": 2048, 00:14:10.261 "data_size": 63488 
00:14:10.261 }, 00:14:10.261 { 00:14:10.261 "name": "BaseBdev3", 00:14:10.261 "uuid": "8103687d-dac6-582b-9274-db95af874279", 00:14:10.261 "is_configured": true, 00:14:10.261 "data_offset": 2048, 00:14:10.261 "data_size": 63488 00:14:10.261 } 00:14:10.261 ] 00:14:10.261 }' 00:14:10.261 12:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.261 12:13:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.827 12:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:10.828 12:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:10.828 [2024-11-25 12:13:06.820997] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:14:11.763 12:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:11.763 12:13:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.763 12:13:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.763 12:13:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.763 12:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:11.763 12:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:14:11.763 12:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:14:11.763 12:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:14:11.763 12:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:11.763 12:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:14:11.763 12:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:11.763 12:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:11.763 12:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:11.763 12:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.763 12:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.763 12:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.763 12:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.763 12:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.763 12:13:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.763 12:13:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.763 12:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.763 12:13:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.763 12:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.763 "name": "raid_bdev1", 00:14:11.763 "uuid": "3558fef9-58b9-4e88-95ea-5932532c782f", 00:14:11.763 "strip_size_kb": 64, 00:14:11.763 "state": "online", 00:14:11.763 "raid_level": "concat", 00:14:11.763 "superblock": true, 00:14:11.763 "num_base_bdevs": 3, 00:14:11.763 "num_base_bdevs_discovered": 3, 00:14:11.763 "num_base_bdevs_operational": 3, 00:14:11.763 "base_bdevs_list": [ 00:14:11.763 { 00:14:11.763 "name": "BaseBdev1", 00:14:11.763 "uuid": "4e14098d-e5e5-5b53-b2d1-a1ccb4d081e6", 00:14:11.763 "is_configured": true, 00:14:11.763 "data_offset": 2048, 00:14:11.763 "data_size": 63488 
00:14:11.763 }, 00:14:11.763 { 00:14:11.763 "name": "BaseBdev2", 00:14:11.763 "uuid": "2e4929af-118a-5666-8160-25cb2e7b6167", 00:14:11.763 "is_configured": true, 00:14:11.763 "data_offset": 2048, 00:14:11.763 "data_size": 63488 00:14:11.763 }, 00:14:11.763 { 00:14:11.763 "name": "BaseBdev3", 00:14:11.763 "uuid": "8103687d-dac6-582b-9274-db95af874279", 00:14:11.763 "is_configured": true, 00:14:11.763 "data_offset": 2048, 00:14:11.763 "data_size": 63488 00:14:11.763 } 00:14:11.763 ] 00:14:11.763 }' 00:14:11.763 12:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.763 12:13:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.331 12:13:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:12.331 12:13:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.331 12:13:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.331 [2024-11-25 12:13:08.256678] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:12.331 [2024-11-25 12:13:08.256714] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:12.331 [2024-11-25 12:13:08.260064] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:12.331 [2024-11-25 12:13:08.260258] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:12.331 [2024-11-25 12:13:08.260329] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:12.331 [2024-11-25 12:13:08.260371] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:12.331 { 00:14:12.331 "results": [ 00:14:12.331 { 00:14:12.331 "job": "raid_bdev1", 00:14:12.331 "core_mask": "0x1", 00:14:12.331 "workload": "randrw", 00:14:12.331 "percentage": 50, 
00:14:12.331 "status": "finished", 00:14:12.331 "queue_depth": 1, 00:14:12.331 "io_size": 131072, 00:14:12.331 "runtime": 1.433062, 00:14:12.331 "iops": 10659.692323151407, 00:14:12.331 "mibps": 1332.4615403939258, 00:14:12.331 "io_failed": 1, 00:14:12.331 "io_timeout": 0, 00:14:12.331 "avg_latency_us": 131.3564857450594, 00:14:12.331 "min_latency_us": 42.123636363636365, 00:14:12.331 "max_latency_us": 1832.0290909090909 00:14:12.331 } 00:14:12.331 ], 00:14:12.331 "core_count": 1 00:14:12.331 } 00:14:12.331 12:13:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.331 12:13:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67169 00:14:12.331 12:13:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67169 ']' 00:14:12.331 12:13:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67169 00:14:12.331 12:13:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:14:12.331 12:13:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:12.331 12:13:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67169 00:14:12.331 killing process with pid 67169 00:14:12.331 12:13:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:12.331 12:13:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:12.331 12:13:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67169' 00:14:12.331 12:13:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67169 00:14:12.331 [2024-11-25 12:13:08.298421] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:12.331 12:13:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67169 00:14:12.589 [2024-11-25 
12:13:08.506942] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:13.963 12:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Th2fy0ao6A 00:14:13.963 12:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:13.963 12:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:13.963 12:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:14:13.963 12:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:14:13.963 ************************************ 00:14:13.963 END TEST raid_read_error_test 00:14:13.963 ************************************ 00:14:13.963 12:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:13.963 12:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:13.963 12:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:14:13.963 00:14:13.963 real 0m4.718s 00:14:13.963 user 0m5.834s 00:14:13.963 sys 0m0.577s 00:14:13.963 12:13:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:13.963 12:13:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.963 12:13:09 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:14:13.963 12:13:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:13.963 12:13:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:13.963 12:13:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:13.963 ************************************ 00:14:13.963 START TEST raid_write_error_test 00:14:13.963 ************************************ 00:14:13.963 12:13:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:14:13.963 12:13:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:14:13.963 12:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:14:13.963 12:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:14:13.963 12:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:13.963 12:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:13.963 12:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:13.963 12:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:13.963 12:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:13.963 12:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:13.963 12:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:13.963 12:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:13.963 12:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:13.963 12:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:13.963 12:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:13.963 12:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:13.963 12:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:13.963 12:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:13.963 12:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:13.963 12:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:13.963 12:13:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:13.963 12:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:13.963 12:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:14:13.963 12:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:13.963 12:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:13.963 12:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:13.963 12:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.SZrYQveBqx 00:14:13.963 12:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67320 00:14:13.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.963 12:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67320 00:14:13.963 12:13:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67320 ']' 00:14:13.963 12:13:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.963 12:13:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:13.963 12:13:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:13.963 12:13:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:13.963 12:13:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:13.963 12:13:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.963 [2024-11-25 12:13:09.782302] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:14:13.963 [2024-11-25 12:13:09.782479] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67320 ] 00:14:13.963 [2024-11-25 12:13:09.960313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.275 [2024-11-25 12:13:10.090835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.275 [2024-11-25 12:13:10.292698] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:14.275 [2024-11-25 12:13:10.292783] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:14.865 12:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:14.865 12:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:14.865 12:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:14.865 12:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:14.865 12:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.865 12:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.865 BaseBdev1_malloc 00:14:14.865 12:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.865 12:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:14:14.865 12:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.865 12:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.865 true 00:14:14.865 12:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.865 12:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:14.865 12:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.865 12:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.865 [2024-11-25 12:13:10.838700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:14.865 [2024-11-25 12:13:10.838918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.865 [2024-11-25 12:13:10.839079] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:14.865 [2024-11-25 12:13:10.839206] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.865 [2024-11-25 12:13:10.842203] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.865 [2024-11-25 12:13:10.842385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:14.865 BaseBdev1 00:14:14.865 12:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.865 12:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:14.865 12:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:14.865 12:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.865 12:13:10 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:14.865 BaseBdev2_malloc 00:14:14.865 12:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.865 12:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:14.865 12:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.865 12:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.865 true 00:14:14.865 12:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.865 12:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:14.865 12:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.865 12:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.865 [2024-11-25 12:13:10.906651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:14.865 [2024-11-25 12:13:10.906727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.865 [2024-11-25 12:13:10.906761] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:14.865 [2024-11-25 12:13:10.906780] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.865 [2024-11-25 12:13:10.909608] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.865 [2024-11-25 12:13:10.909656] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:14.865 BaseBdev2 00:14:14.865 12:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.865 12:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:14.865 12:13:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:14.865 12:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.865 12:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.122 BaseBdev3_malloc 00:14:15.122 12:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.122 12:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:15.122 12:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.122 12:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.122 true 00:14:15.122 12:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.122 12:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:15.122 12:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.123 12:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.123 [2024-11-25 12:13:10.984030] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:15.123 [2024-11-25 12:13:10.984105] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.123 [2024-11-25 12:13:10.984142] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:15.123 [2024-11-25 12:13:10.984161] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.123 [2024-11-25 12:13:10.987079] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.123 [2024-11-25 12:13:10.987130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:14:15.123 BaseBdev3 00:14:15.123 12:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.123 12:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:14:15.123 12:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.123 12:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.123 [2024-11-25 12:13:10.996135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:15.123 [2024-11-25 12:13:10.998704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:15.123 [2024-11-25 12:13:10.998942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:15.123 [2024-11-25 12:13:10.999230] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:15.123 [2024-11-25 12:13:10.999250] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:15.123 [2024-11-25 12:13:10.999621] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:14:15.123 [2024-11-25 12:13:10.999840] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:15.123 [2024-11-25 12:13:10.999863] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:15.123 [2024-11-25 12:13:11.000118] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:15.123 12:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.123 12:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:14:15.123 12:13:11 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:15.123 12:13:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:15.123 12:13:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:15.123 12:13:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:15.123 12:13:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:15.123 12:13:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.123 12:13:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.123 12:13:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.123 12:13:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.123 12:13:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.123 12:13:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.123 12:13:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.123 12:13:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.123 12:13:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.123 12:13:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.123 "name": "raid_bdev1", 00:14:15.123 "uuid": "d71255b5-9e7c-4fbd-ad44-d612ffd32716", 00:14:15.123 "strip_size_kb": 64, 00:14:15.123 "state": "online", 00:14:15.123 "raid_level": "concat", 00:14:15.123 "superblock": true, 00:14:15.123 "num_base_bdevs": 3, 00:14:15.123 "num_base_bdevs_discovered": 3, 00:14:15.123 "num_base_bdevs_operational": 3, 00:14:15.123 "base_bdevs_list": [ 00:14:15.123 { 00:14:15.123 
"name": "BaseBdev1", 00:14:15.123 "uuid": "be5add3d-50fb-5175-a027-9c3e4cddf1f9", 00:14:15.123 "is_configured": true, 00:14:15.123 "data_offset": 2048, 00:14:15.123 "data_size": 63488 00:14:15.123 }, 00:14:15.123 { 00:14:15.123 "name": "BaseBdev2", 00:14:15.123 "uuid": "fa18e42b-fbf7-5cb1-b1af-fc3ebb766b94", 00:14:15.123 "is_configured": true, 00:14:15.123 "data_offset": 2048, 00:14:15.123 "data_size": 63488 00:14:15.123 }, 00:14:15.123 { 00:14:15.123 "name": "BaseBdev3", 00:14:15.123 "uuid": "27246fb1-809b-5c98-b68d-399f09224525", 00:14:15.123 "is_configured": true, 00:14:15.123 "data_offset": 2048, 00:14:15.123 "data_size": 63488 00:14:15.123 } 00:14:15.123 ] 00:14:15.123 }' 00:14:15.123 12:13:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.123 12:13:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.686 12:13:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:15.686 12:13:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:15.686 [2024-11-25 12:13:11.625747] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:14:16.619 12:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:16.619 12:13:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.619 12:13:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.619 12:13:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.619 12:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:16.619 12:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:14:16.619 12:13:12 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:14:16.619 12:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:14:16.619 12:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:16.619 12:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.619 12:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:16.619 12:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:16.619 12:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:16.619 12:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.619 12:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.619 12:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.619 12:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.619 12:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.619 12:13:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.619 12:13:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.619 12:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.619 12:13:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.619 12:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.619 "name": "raid_bdev1", 00:14:16.619 "uuid": "d71255b5-9e7c-4fbd-ad44-d612ffd32716", 00:14:16.619 "strip_size_kb": 64, 00:14:16.619 "state": "online", 
00:14:16.619 "raid_level": "concat", 00:14:16.619 "superblock": true, 00:14:16.619 "num_base_bdevs": 3, 00:14:16.619 "num_base_bdevs_discovered": 3, 00:14:16.619 "num_base_bdevs_operational": 3, 00:14:16.619 "base_bdevs_list": [ 00:14:16.619 { 00:14:16.619 "name": "BaseBdev1", 00:14:16.619 "uuid": "be5add3d-50fb-5175-a027-9c3e4cddf1f9", 00:14:16.619 "is_configured": true, 00:14:16.619 "data_offset": 2048, 00:14:16.619 "data_size": 63488 00:14:16.619 }, 00:14:16.619 { 00:14:16.619 "name": "BaseBdev2", 00:14:16.619 "uuid": "fa18e42b-fbf7-5cb1-b1af-fc3ebb766b94", 00:14:16.619 "is_configured": true, 00:14:16.619 "data_offset": 2048, 00:14:16.619 "data_size": 63488 00:14:16.619 }, 00:14:16.619 { 00:14:16.619 "name": "BaseBdev3", 00:14:16.619 "uuid": "27246fb1-809b-5c98-b68d-399f09224525", 00:14:16.619 "is_configured": true, 00:14:16.619 "data_offset": 2048, 00:14:16.619 "data_size": 63488 00:14:16.619 } 00:14:16.619 ] 00:14:16.619 }' 00:14:16.619 12:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.619 12:13:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.186 12:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:17.186 12:13:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.186 12:13:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.186 [2024-11-25 12:13:12.984696] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:17.186 [2024-11-25 12:13:12.984737] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:17.186 [2024-11-25 12:13:12.988041] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:17.186 [2024-11-25 12:13:12.988250] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:17.186 [2024-11-25 12:13:12.988324] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:17.186 [2024-11-25 12:13:12.988359] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:17.186 { 00:14:17.186 "results": [ 00:14:17.186 { 00:14:17.186 "job": "raid_bdev1", 00:14:17.186 "core_mask": "0x1", 00:14:17.186 "workload": "randrw", 00:14:17.186 "percentage": 50, 00:14:17.186 "status": "finished", 00:14:17.186 "queue_depth": 1, 00:14:17.186 "io_size": 131072, 00:14:17.186 "runtime": 1.356481, 00:14:17.186 "iops": 10274.37907349974, 00:14:17.186 "mibps": 1284.2973841874675, 00:14:17.187 "io_failed": 1, 00:14:17.187 "io_timeout": 0, 00:14:17.187 "avg_latency_us": 136.05021302130214, 00:14:17.187 "min_latency_us": 43.054545454545455, 00:14:17.187 "max_latency_us": 1899.0545454545454 00:14:17.187 } 00:14:17.187 ], 00:14:17.187 "core_count": 1 00:14:17.187 } 00:14:17.187 12:13:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.187 12:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67320 00:14:17.187 12:13:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67320 ']' 00:14:17.187 12:13:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67320 00:14:17.187 12:13:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:14:17.187 12:13:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:17.187 12:13:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67320 00:14:17.187 killing process with pid 67320 00:14:17.187 12:13:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:17.187 12:13:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:17.187 
12:13:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67320' 00:14:17.187 12:13:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67320 00:14:17.187 [2024-11-25 12:13:13.024088] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:17.187 12:13:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67320 00:14:17.187 [2024-11-25 12:13:13.263942] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:18.567 12:13:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.SZrYQveBqx 00:14:18.567 12:13:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:18.567 12:13:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:18.567 12:13:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:14:18.567 12:13:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:14:18.567 12:13:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:18.567 12:13:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:18.567 12:13:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:14:18.567 00:14:18.567 real 0m4.677s 00:14:18.567 user 0m5.755s 00:14:18.567 sys 0m0.574s 00:14:18.567 12:13:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:18.567 12:13:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.567 ************************************ 00:14:18.567 END TEST raid_write_error_test 00:14:18.567 ************************************ 00:14:18.567 12:13:14 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:14:18.567 12:13:14 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:14:18.567 12:13:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:18.567 12:13:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:18.567 12:13:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:18.567 ************************************ 00:14:18.567 START TEST raid_state_function_test 00:14:18.567 ************************************ 00:14:18.567 12:13:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:14:18.567 12:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:18.567 12:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:18.567 12:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:18.567 12:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:18.567 12:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:18.567 12:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:18.567 12:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:18.567 12:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:18.567 12:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:18.567 12:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:18.567 12:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:18.567 12:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:18.567 12:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:18.567 12:13:14 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:18.567 12:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:18.567 12:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:18.567 12:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:18.567 12:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:18.567 12:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:18.567 12:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:18.567 12:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:18.567 12:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:18.568 12:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:18.568 12:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:18.568 12:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:18.568 Process raid pid: 67464 00:14:18.568 12:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67464 00:14:18.568 12:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:18.568 12:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67464' 00:14:18.568 12:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67464 00:14:18.568 12:13:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67464 ']' 00:14:18.568 12:13:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.568 12:13:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:18.568 12:13:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.568 12:13:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:18.568 12:13:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.568 [2024-11-25 12:13:14.531399] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:14:18.568 [2024-11-25 12:13:14.531810] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:18.826 [2024-11-25 12:13:14.714713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.826 [2024-11-25 12:13:14.875205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.085 [2024-11-25 12:13:15.094696] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:19.085 [2024-11-25 12:13:15.094999] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:19.342 12:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:19.342 12:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:19.342 12:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:19.342 12:13:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.342 12:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.600 [2024-11-25 12:13:15.437308] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:19.600 [2024-11-25 12:13:15.437390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:19.600 [2024-11-25 12:13:15.437409] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:19.600 [2024-11-25 12:13:15.437425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:19.600 [2024-11-25 12:13:15.437435] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:19.600 [2024-11-25 12:13:15.437449] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:19.600 12:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.600 12:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:19.600 12:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.600 12:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.600 12:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:19.600 12:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:19.600 12:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:19.600 12:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.600 12:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.600 
12:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.600 12:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.600 12:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.600 12:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.600 12:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.600 12:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.600 12:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.600 12:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.600 "name": "Existed_Raid", 00:14:19.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.600 "strip_size_kb": 0, 00:14:19.600 "state": "configuring", 00:14:19.600 "raid_level": "raid1", 00:14:19.600 "superblock": false, 00:14:19.600 "num_base_bdevs": 3, 00:14:19.600 "num_base_bdevs_discovered": 0, 00:14:19.600 "num_base_bdevs_operational": 3, 00:14:19.600 "base_bdevs_list": [ 00:14:19.600 { 00:14:19.600 "name": "BaseBdev1", 00:14:19.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.601 "is_configured": false, 00:14:19.601 "data_offset": 0, 00:14:19.601 "data_size": 0 00:14:19.601 }, 00:14:19.601 { 00:14:19.601 "name": "BaseBdev2", 00:14:19.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.601 "is_configured": false, 00:14:19.601 "data_offset": 0, 00:14:19.601 "data_size": 0 00:14:19.601 }, 00:14:19.601 { 00:14:19.601 "name": "BaseBdev3", 00:14:19.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.601 "is_configured": false, 00:14:19.601 "data_offset": 0, 00:14:19.601 "data_size": 0 00:14:19.601 } 00:14:19.601 ] 00:14:19.601 }' 00:14:19.601 12:13:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.601 12:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.168 12:13:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:20.168 12:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.168 12:13:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.168 [2024-11-25 12:13:15.997427] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:20.168 [2024-11-25 12:13:15.997475] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:20.168 12:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.168 12:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:20.168 12:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.168 12:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.168 [2024-11-25 12:13:16.005412] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:20.168 [2024-11-25 12:13:16.005602] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:20.168 [2024-11-25 12:13:16.005726] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:20.168 [2024-11-25 12:13:16.005788] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:20.168 [2024-11-25 12:13:16.005987] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:20.168 [2024-11-25 12:13:16.006065] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:20.168 12:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.168 12:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:20.168 12:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.168 12:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.168 [2024-11-25 12:13:16.049903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:20.168 BaseBdev1 00:14:20.168 12:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.168 12:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:20.168 12:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:20.168 12:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:20.168 12:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:20.168 12:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:20.168 12:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:20.168 12:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:20.168 12:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.168 12:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.168 12:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.168 12:13:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:20.168 12:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.168 12:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.168 [ 00:14:20.168 { 00:14:20.168 "name": "BaseBdev1", 00:14:20.168 "aliases": [ 00:14:20.168 "8f80df21-0cf1-43fe-8104-51f256ab65fc" 00:14:20.168 ], 00:14:20.169 "product_name": "Malloc disk", 00:14:20.169 "block_size": 512, 00:14:20.169 "num_blocks": 65536, 00:14:20.169 "uuid": "8f80df21-0cf1-43fe-8104-51f256ab65fc", 00:14:20.169 "assigned_rate_limits": { 00:14:20.169 "rw_ios_per_sec": 0, 00:14:20.169 "rw_mbytes_per_sec": 0, 00:14:20.169 "r_mbytes_per_sec": 0, 00:14:20.169 "w_mbytes_per_sec": 0 00:14:20.169 }, 00:14:20.169 "claimed": true, 00:14:20.169 "claim_type": "exclusive_write", 00:14:20.169 "zoned": false, 00:14:20.169 "supported_io_types": { 00:14:20.169 "read": true, 00:14:20.169 "write": true, 00:14:20.169 "unmap": true, 00:14:20.169 "flush": true, 00:14:20.169 "reset": true, 00:14:20.169 "nvme_admin": false, 00:14:20.169 "nvme_io": false, 00:14:20.169 "nvme_io_md": false, 00:14:20.169 "write_zeroes": true, 00:14:20.169 "zcopy": true, 00:14:20.169 "get_zone_info": false, 00:14:20.169 "zone_management": false, 00:14:20.169 "zone_append": false, 00:14:20.169 "compare": false, 00:14:20.169 "compare_and_write": false, 00:14:20.169 "abort": true, 00:14:20.169 "seek_hole": false, 00:14:20.169 "seek_data": false, 00:14:20.169 "copy": true, 00:14:20.169 "nvme_iov_md": false 00:14:20.169 }, 00:14:20.169 "memory_domains": [ 00:14:20.169 { 00:14:20.169 "dma_device_id": "system", 00:14:20.169 "dma_device_type": 1 00:14:20.169 }, 00:14:20.169 { 00:14:20.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.169 "dma_device_type": 2 00:14:20.169 } 00:14:20.169 ], 00:14:20.169 "driver_specific": {} 00:14:20.169 } 00:14:20.169 ] 00:14:20.169 12:13:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.169 12:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:20.169 12:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:20.169 12:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:20.169 12:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:20.169 12:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:20.169 12:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:20.169 12:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:20.169 12:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.169 12:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.169 12:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.169 12:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.169 12:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.169 12:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.169 12:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.169 12:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.169 12:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.169 12:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:14:20.169 "name": "Existed_Raid", 00:14:20.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.169 "strip_size_kb": 0, 00:14:20.169 "state": "configuring", 00:14:20.169 "raid_level": "raid1", 00:14:20.169 "superblock": false, 00:14:20.169 "num_base_bdevs": 3, 00:14:20.169 "num_base_bdevs_discovered": 1, 00:14:20.169 "num_base_bdevs_operational": 3, 00:14:20.169 "base_bdevs_list": [ 00:14:20.169 { 00:14:20.169 "name": "BaseBdev1", 00:14:20.169 "uuid": "8f80df21-0cf1-43fe-8104-51f256ab65fc", 00:14:20.169 "is_configured": true, 00:14:20.169 "data_offset": 0, 00:14:20.169 "data_size": 65536 00:14:20.169 }, 00:14:20.169 { 00:14:20.169 "name": "BaseBdev2", 00:14:20.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.169 "is_configured": false, 00:14:20.169 "data_offset": 0, 00:14:20.169 "data_size": 0 00:14:20.169 }, 00:14:20.169 { 00:14:20.169 "name": "BaseBdev3", 00:14:20.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.169 "is_configured": false, 00:14:20.169 "data_offset": 0, 00:14:20.169 "data_size": 0 00:14:20.169 } 00:14:20.169 ] 00:14:20.169 }' 00:14:20.169 12:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.169 12:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.736 12:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:20.736 12:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.736 12:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.736 [2024-11-25 12:13:16.562125] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:20.736 [2024-11-25 12:13:16.562191] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:20.736 12:13:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.736 12:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:20.736 12:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.736 12:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.736 [2024-11-25 12:13:16.570186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:20.736 [2024-11-25 12:13:16.572799] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:20.736 [2024-11-25 12:13:16.572976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:20.736 [2024-11-25 12:13:16.573107] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:20.736 [2024-11-25 12:13:16.573167] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:20.736 12:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.736 12:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:20.736 12:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:20.736 12:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:20.736 12:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:20.736 12:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:20.736 12:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:20.736 12:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:14:20.736 12:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:20.736 12:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.736 12:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.736 12:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.736 12:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.736 12:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.736 12:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.736 12:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.736 12:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.736 12:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.736 12:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.736 "name": "Existed_Raid", 00:14:20.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.736 "strip_size_kb": 0, 00:14:20.736 "state": "configuring", 00:14:20.736 "raid_level": "raid1", 00:14:20.736 "superblock": false, 00:14:20.736 "num_base_bdevs": 3, 00:14:20.736 "num_base_bdevs_discovered": 1, 00:14:20.736 "num_base_bdevs_operational": 3, 00:14:20.736 "base_bdevs_list": [ 00:14:20.736 { 00:14:20.736 "name": "BaseBdev1", 00:14:20.736 "uuid": "8f80df21-0cf1-43fe-8104-51f256ab65fc", 00:14:20.736 "is_configured": true, 00:14:20.736 "data_offset": 0, 00:14:20.736 "data_size": 65536 00:14:20.736 }, 00:14:20.736 { 00:14:20.736 "name": "BaseBdev2", 00:14:20.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.736 
"is_configured": false, 00:14:20.736 "data_offset": 0, 00:14:20.736 "data_size": 0 00:14:20.736 }, 00:14:20.736 { 00:14:20.736 "name": "BaseBdev3", 00:14:20.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.736 "is_configured": false, 00:14:20.736 "data_offset": 0, 00:14:20.736 "data_size": 0 00:14:20.736 } 00:14:20.736 ] 00:14:20.736 }' 00:14:20.736 12:13:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.736 12:13:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.304 12:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:21.304 12:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.304 12:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.304 [2024-11-25 12:13:17.136532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:21.304 BaseBdev2 00:14:21.304 12:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.304 12:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:21.304 12:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:21.304 12:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:21.304 12:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:21.304 12:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:21.304 12:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:21.304 12:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:21.304 12:13:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.304 12:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.304 12:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.304 12:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:21.304 12:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.304 12:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.304 [ 00:14:21.304 { 00:14:21.304 "name": "BaseBdev2", 00:14:21.304 "aliases": [ 00:14:21.304 "886a4593-eb37-4c45-83e6-2cbc69e70953" 00:14:21.304 ], 00:14:21.304 "product_name": "Malloc disk", 00:14:21.304 "block_size": 512, 00:14:21.304 "num_blocks": 65536, 00:14:21.304 "uuid": "886a4593-eb37-4c45-83e6-2cbc69e70953", 00:14:21.304 "assigned_rate_limits": { 00:14:21.304 "rw_ios_per_sec": 0, 00:14:21.304 "rw_mbytes_per_sec": 0, 00:14:21.304 "r_mbytes_per_sec": 0, 00:14:21.304 "w_mbytes_per_sec": 0 00:14:21.304 }, 00:14:21.304 "claimed": true, 00:14:21.304 "claim_type": "exclusive_write", 00:14:21.304 "zoned": false, 00:14:21.304 "supported_io_types": { 00:14:21.304 "read": true, 00:14:21.304 "write": true, 00:14:21.304 "unmap": true, 00:14:21.304 "flush": true, 00:14:21.304 "reset": true, 00:14:21.304 "nvme_admin": false, 00:14:21.304 "nvme_io": false, 00:14:21.304 "nvme_io_md": false, 00:14:21.304 "write_zeroes": true, 00:14:21.304 "zcopy": true, 00:14:21.304 "get_zone_info": false, 00:14:21.304 "zone_management": false, 00:14:21.304 "zone_append": false, 00:14:21.304 "compare": false, 00:14:21.304 "compare_and_write": false, 00:14:21.304 "abort": true, 00:14:21.304 "seek_hole": false, 00:14:21.304 "seek_data": false, 00:14:21.304 "copy": true, 00:14:21.304 "nvme_iov_md": false 00:14:21.304 }, 00:14:21.304 
"memory_domains": [ 00:14:21.304 { 00:14:21.304 "dma_device_id": "system", 00:14:21.304 "dma_device_type": 1 00:14:21.304 }, 00:14:21.304 { 00:14:21.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.304 "dma_device_type": 2 00:14:21.304 } 00:14:21.304 ], 00:14:21.304 "driver_specific": {} 00:14:21.304 } 00:14:21.304 ] 00:14:21.304 12:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.304 12:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:21.304 12:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:21.304 12:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:21.304 12:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:21.304 12:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.304 12:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:21.304 12:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:21.304 12:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:21.304 12:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:21.304 12:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.304 12:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.304 12:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.304 12:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.304 12:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:21.304 12:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.304 12:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.304 12:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.304 12:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.304 12:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.304 "name": "Existed_Raid", 00:14:21.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.304 "strip_size_kb": 0, 00:14:21.304 "state": "configuring", 00:14:21.304 "raid_level": "raid1", 00:14:21.304 "superblock": false, 00:14:21.304 "num_base_bdevs": 3, 00:14:21.304 "num_base_bdevs_discovered": 2, 00:14:21.304 "num_base_bdevs_operational": 3, 00:14:21.304 "base_bdevs_list": [ 00:14:21.304 { 00:14:21.304 "name": "BaseBdev1", 00:14:21.304 "uuid": "8f80df21-0cf1-43fe-8104-51f256ab65fc", 00:14:21.304 "is_configured": true, 00:14:21.304 "data_offset": 0, 00:14:21.304 "data_size": 65536 00:14:21.304 }, 00:14:21.304 { 00:14:21.304 "name": "BaseBdev2", 00:14:21.304 "uuid": "886a4593-eb37-4c45-83e6-2cbc69e70953", 00:14:21.304 "is_configured": true, 00:14:21.304 "data_offset": 0, 00:14:21.304 "data_size": 65536 00:14:21.304 }, 00:14:21.304 { 00:14:21.304 "name": "BaseBdev3", 00:14:21.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.304 "is_configured": false, 00:14:21.304 "data_offset": 0, 00:14:21.304 "data_size": 0 00:14:21.304 } 00:14:21.304 ] 00:14:21.304 }' 00:14:21.304 12:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.304 12:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.872 12:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:14:21.872 12:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.873 12:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.873 [2024-11-25 12:13:17.714425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:21.873 [2024-11-25 12:13:17.714500] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:21.873 [2024-11-25 12:13:17.714522] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:21.873 [2024-11-25 12:13:17.714880] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:21.873 [2024-11-25 12:13:17.715106] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:21.873 [2024-11-25 12:13:17.715123] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:21.873 [2024-11-25 12:13:17.715476] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:21.873 BaseBdev3 00:14:21.873 12:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.873 12:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:21.873 12:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:21.873 12:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:21.873 12:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:21.873 12:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:21.873 12:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:21.873 12:13:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:21.873 12:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.873 12:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.873 12:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.873 12:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:21.873 12:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.873 12:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.873 [ 00:14:21.873 { 00:14:21.873 "name": "BaseBdev3", 00:14:21.873 "aliases": [ 00:14:21.873 "3bcce69d-34ed-47db-a780-fc6efb711c25" 00:14:21.873 ], 00:14:21.873 "product_name": "Malloc disk", 00:14:21.873 "block_size": 512, 00:14:21.873 "num_blocks": 65536, 00:14:21.873 "uuid": "3bcce69d-34ed-47db-a780-fc6efb711c25", 00:14:21.873 "assigned_rate_limits": { 00:14:21.873 "rw_ios_per_sec": 0, 00:14:21.873 "rw_mbytes_per_sec": 0, 00:14:21.873 "r_mbytes_per_sec": 0, 00:14:21.873 "w_mbytes_per_sec": 0 00:14:21.873 }, 00:14:21.873 "claimed": true, 00:14:21.873 "claim_type": "exclusive_write", 00:14:21.873 "zoned": false, 00:14:21.873 "supported_io_types": { 00:14:21.873 "read": true, 00:14:21.873 "write": true, 00:14:21.873 "unmap": true, 00:14:21.873 "flush": true, 00:14:21.873 "reset": true, 00:14:21.873 "nvme_admin": false, 00:14:21.873 "nvme_io": false, 00:14:21.873 "nvme_io_md": false, 00:14:21.873 "write_zeroes": true, 00:14:21.873 "zcopy": true, 00:14:21.873 "get_zone_info": false, 00:14:21.873 "zone_management": false, 00:14:21.873 "zone_append": false, 00:14:21.873 "compare": false, 00:14:21.873 "compare_and_write": false, 00:14:21.873 "abort": true, 00:14:21.873 "seek_hole": false, 00:14:21.873 "seek_data": false, 00:14:21.873 
"copy": true, 00:14:21.873 "nvme_iov_md": false 00:14:21.873 }, 00:14:21.873 "memory_domains": [ 00:14:21.873 { 00:14:21.873 "dma_device_id": "system", 00:14:21.873 "dma_device_type": 1 00:14:21.873 }, 00:14:21.873 { 00:14:21.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.873 "dma_device_type": 2 00:14:21.873 } 00:14:21.873 ], 00:14:21.873 "driver_specific": {} 00:14:21.873 } 00:14:21.873 ] 00:14:21.873 12:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.873 12:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:21.873 12:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:21.873 12:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:21.873 12:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:21.873 12:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.873 12:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:21.873 12:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:21.873 12:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:21.873 12:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:21.873 12:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.873 12:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.873 12:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.873 12:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.873 12:13:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.873 12:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.873 12:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.873 12:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.873 12:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.873 12:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.873 "name": "Existed_Raid", 00:14:21.873 "uuid": "d415f2ee-42f3-422f-9277-204fb9a9da52", 00:14:21.873 "strip_size_kb": 0, 00:14:21.873 "state": "online", 00:14:21.873 "raid_level": "raid1", 00:14:21.873 "superblock": false, 00:14:21.873 "num_base_bdevs": 3, 00:14:21.873 "num_base_bdevs_discovered": 3, 00:14:21.873 "num_base_bdevs_operational": 3, 00:14:21.873 "base_bdevs_list": [ 00:14:21.873 { 00:14:21.873 "name": "BaseBdev1", 00:14:21.873 "uuid": "8f80df21-0cf1-43fe-8104-51f256ab65fc", 00:14:21.873 "is_configured": true, 00:14:21.873 "data_offset": 0, 00:14:21.873 "data_size": 65536 00:14:21.873 }, 00:14:21.873 { 00:14:21.873 "name": "BaseBdev2", 00:14:21.873 "uuid": "886a4593-eb37-4c45-83e6-2cbc69e70953", 00:14:21.873 "is_configured": true, 00:14:21.873 "data_offset": 0, 00:14:21.873 "data_size": 65536 00:14:21.873 }, 00:14:21.873 { 00:14:21.873 "name": "BaseBdev3", 00:14:21.873 "uuid": "3bcce69d-34ed-47db-a780-fc6efb711c25", 00:14:21.873 "is_configured": true, 00:14:21.873 "data_offset": 0, 00:14:21.873 "data_size": 65536 00:14:21.873 } 00:14:21.873 ] 00:14:21.873 }' 00:14:21.873 12:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.873 12:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.440 12:13:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:22.440 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:22.440 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:22.440 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:22.440 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:22.440 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:22.440 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:22.440 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:22.440 12:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.440 12:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.440 [2024-11-25 12:13:18.267032] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:22.440 12:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.440 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:22.440 "name": "Existed_Raid", 00:14:22.440 "aliases": [ 00:14:22.440 "d415f2ee-42f3-422f-9277-204fb9a9da52" 00:14:22.440 ], 00:14:22.440 "product_name": "Raid Volume", 00:14:22.440 "block_size": 512, 00:14:22.440 "num_blocks": 65536, 00:14:22.440 "uuid": "d415f2ee-42f3-422f-9277-204fb9a9da52", 00:14:22.440 "assigned_rate_limits": { 00:14:22.440 "rw_ios_per_sec": 0, 00:14:22.440 "rw_mbytes_per_sec": 0, 00:14:22.440 "r_mbytes_per_sec": 0, 00:14:22.440 "w_mbytes_per_sec": 0 00:14:22.440 }, 00:14:22.440 "claimed": false, 00:14:22.440 "zoned": false, 
00:14:22.440 "supported_io_types": { 00:14:22.440 "read": true, 00:14:22.440 "write": true, 00:14:22.440 "unmap": false, 00:14:22.440 "flush": false, 00:14:22.440 "reset": true, 00:14:22.440 "nvme_admin": false, 00:14:22.440 "nvme_io": false, 00:14:22.440 "nvme_io_md": false, 00:14:22.440 "write_zeroes": true, 00:14:22.440 "zcopy": false, 00:14:22.440 "get_zone_info": false, 00:14:22.440 "zone_management": false, 00:14:22.440 "zone_append": false, 00:14:22.440 "compare": false, 00:14:22.440 "compare_and_write": false, 00:14:22.440 "abort": false, 00:14:22.440 "seek_hole": false, 00:14:22.440 "seek_data": false, 00:14:22.440 "copy": false, 00:14:22.440 "nvme_iov_md": false 00:14:22.440 }, 00:14:22.440 "memory_domains": [ 00:14:22.440 { 00:14:22.440 "dma_device_id": "system", 00:14:22.440 "dma_device_type": 1 00:14:22.440 }, 00:14:22.440 { 00:14:22.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.440 "dma_device_type": 2 00:14:22.441 }, 00:14:22.441 { 00:14:22.441 "dma_device_id": "system", 00:14:22.441 "dma_device_type": 1 00:14:22.441 }, 00:14:22.441 { 00:14:22.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.441 "dma_device_type": 2 00:14:22.441 }, 00:14:22.441 { 00:14:22.441 "dma_device_id": "system", 00:14:22.441 "dma_device_type": 1 00:14:22.441 }, 00:14:22.441 { 00:14:22.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.441 "dma_device_type": 2 00:14:22.441 } 00:14:22.441 ], 00:14:22.441 "driver_specific": { 00:14:22.441 "raid": { 00:14:22.441 "uuid": "d415f2ee-42f3-422f-9277-204fb9a9da52", 00:14:22.441 "strip_size_kb": 0, 00:14:22.441 "state": "online", 00:14:22.441 "raid_level": "raid1", 00:14:22.441 "superblock": false, 00:14:22.441 "num_base_bdevs": 3, 00:14:22.441 "num_base_bdevs_discovered": 3, 00:14:22.441 "num_base_bdevs_operational": 3, 00:14:22.441 "base_bdevs_list": [ 00:14:22.441 { 00:14:22.441 "name": "BaseBdev1", 00:14:22.441 "uuid": "8f80df21-0cf1-43fe-8104-51f256ab65fc", 00:14:22.441 "is_configured": true, 00:14:22.441 
"data_offset": 0, 00:14:22.441 "data_size": 65536 00:14:22.441 }, 00:14:22.441 { 00:14:22.441 "name": "BaseBdev2", 00:14:22.441 "uuid": "886a4593-eb37-4c45-83e6-2cbc69e70953", 00:14:22.441 "is_configured": true, 00:14:22.441 "data_offset": 0, 00:14:22.441 "data_size": 65536 00:14:22.441 }, 00:14:22.441 { 00:14:22.441 "name": "BaseBdev3", 00:14:22.441 "uuid": "3bcce69d-34ed-47db-a780-fc6efb711c25", 00:14:22.441 "is_configured": true, 00:14:22.441 "data_offset": 0, 00:14:22.441 "data_size": 65536 00:14:22.441 } 00:14:22.441 ] 00:14:22.441 } 00:14:22.441 } 00:14:22.441 }' 00:14:22.441 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:22.441 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:22.441 BaseBdev2 00:14:22.441 BaseBdev3' 00:14:22.441 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:22.441 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:22.441 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:22.441 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:22.441 12:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.441 12:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.441 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:22.441 12:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.441 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:14:22.441 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:22.441 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:22.441 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:22.441 12:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.441 12:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.441 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:22.441 12:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.441 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:22.441 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:22.441 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:22.441 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:22.441 12:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.441 12:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.441 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:22.735 12:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.735 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:22.735 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:14:22.735 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:22.735 12:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.735 12:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.735 [2024-11-25 12:13:18.578816] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:22.735 12:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.735 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:22.735 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:22.735 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:22.735 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:22.735 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:22.735 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:22.735 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.735 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:22.735 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:22.735 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:22.735 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:22.735 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.735 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:14:22.735 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.735 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.735 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.735 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.735 12:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.735 12:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.735 12:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.735 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.735 "name": "Existed_Raid", 00:14:22.735 "uuid": "d415f2ee-42f3-422f-9277-204fb9a9da52", 00:14:22.735 "strip_size_kb": 0, 00:14:22.735 "state": "online", 00:14:22.735 "raid_level": "raid1", 00:14:22.735 "superblock": false, 00:14:22.735 "num_base_bdevs": 3, 00:14:22.735 "num_base_bdevs_discovered": 2, 00:14:22.735 "num_base_bdevs_operational": 2, 00:14:22.735 "base_bdevs_list": [ 00:14:22.735 { 00:14:22.735 "name": null, 00:14:22.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.735 "is_configured": false, 00:14:22.735 "data_offset": 0, 00:14:22.735 "data_size": 65536 00:14:22.735 }, 00:14:22.735 { 00:14:22.735 "name": "BaseBdev2", 00:14:22.735 "uuid": "886a4593-eb37-4c45-83e6-2cbc69e70953", 00:14:22.735 "is_configured": true, 00:14:22.735 "data_offset": 0, 00:14:22.735 "data_size": 65536 00:14:22.736 }, 00:14:22.736 { 00:14:22.736 "name": "BaseBdev3", 00:14:22.736 "uuid": "3bcce69d-34ed-47db-a780-fc6efb711c25", 00:14:22.736 "is_configured": true, 00:14:22.736 "data_offset": 0, 00:14:22.736 "data_size": 65536 00:14:22.736 } 00:14:22.736 ] 
00:14:22.736 }' 00:14:22.736 12:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.736 12:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.311 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:23.311 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:23.311 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.311 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.311 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:23.311 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.312 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.312 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:23.312 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:23.312 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:23.312 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.312 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.312 [2024-11-25 12:13:19.246125] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:23.312 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.312 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:23.312 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:23.312 12:13:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.312 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.312 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:23.312 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.312 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.312 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:23.312 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:23.312 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:23.312 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.312 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.570 [2024-11-25 12:13:19.403498] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:23.570 [2024-11-25 12:13:19.403789] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:23.570 [2024-11-25 12:13:19.490711] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:23.570 [2024-11-25 12:13:19.490981] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:23.570 [2024-11-25 12:13:19.491018] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:23.570 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.570 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:23.570 12:13:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:23.570 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.570 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.570 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.570 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:23.570 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.570 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:23.570 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:23.570 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:23.570 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:23.570 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:23.570 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:23.570 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.570 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.570 BaseBdev2 00:14:23.570 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.570 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:23.570 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:23.570 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:23.570 
12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:23.570 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:23.570 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:23.570 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:23.570 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.570 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.570 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.570 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:23.570 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.570 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.570 [ 00:14:23.570 { 00:14:23.570 "name": "BaseBdev2", 00:14:23.570 "aliases": [ 00:14:23.570 "e1824a88-0ed2-4099-911e-3b478112fa86" 00:14:23.570 ], 00:14:23.570 "product_name": "Malloc disk", 00:14:23.570 "block_size": 512, 00:14:23.570 "num_blocks": 65536, 00:14:23.570 "uuid": "e1824a88-0ed2-4099-911e-3b478112fa86", 00:14:23.570 "assigned_rate_limits": { 00:14:23.570 "rw_ios_per_sec": 0, 00:14:23.570 "rw_mbytes_per_sec": 0, 00:14:23.570 "r_mbytes_per_sec": 0, 00:14:23.570 "w_mbytes_per_sec": 0 00:14:23.570 }, 00:14:23.570 "claimed": false, 00:14:23.570 "zoned": false, 00:14:23.570 "supported_io_types": { 00:14:23.570 "read": true, 00:14:23.570 "write": true, 00:14:23.570 "unmap": true, 00:14:23.570 "flush": true, 00:14:23.570 "reset": true, 00:14:23.570 "nvme_admin": false, 00:14:23.570 "nvme_io": false, 00:14:23.570 "nvme_io_md": false, 00:14:23.570 "write_zeroes": true, 
00:14:23.570 "zcopy": true, 00:14:23.570 "get_zone_info": false, 00:14:23.570 "zone_management": false, 00:14:23.570 "zone_append": false, 00:14:23.570 "compare": false, 00:14:23.570 "compare_and_write": false, 00:14:23.570 "abort": true, 00:14:23.570 "seek_hole": false, 00:14:23.570 "seek_data": false, 00:14:23.570 "copy": true, 00:14:23.570 "nvme_iov_md": false 00:14:23.570 }, 00:14:23.570 "memory_domains": [ 00:14:23.570 { 00:14:23.570 "dma_device_id": "system", 00:14:23.570 "dma_device_type": 1 00:14:23.570 }, 00:14:23.570 { 00:14:23.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.570 "dma_device_type": 2 00:14:23.570 } 00:14:23.570 ], 00:14:23.570 "driver_specific": {} 00:14:23.570 } 00:14:23.570 ] 00:14:23.570 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.570 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:23.570 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:23.570 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:23.570 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:23.570 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.570 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.570 BaseBdev3 00:14:23.570 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.570 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:23.570 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:23.570 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:23.570 12:13:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:23.570 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:23.570 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:23.570 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:23.570 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.571 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.830 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.830 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:23.830 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.830 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.830 [ 00:14:23.830 { 00:14:23.830 "name": "BaseBdev3", 00:14:23.830 "aliases": [ 00:14:23.830 "74a51bcc-3cab-4754-a456-4260b61819ee" 00:14:23.830 ], 00:14:23.830 "product_name": "Malloc disk", 00:14:23.830 "block_size": 512, 00:14:23.830 "num_blocks": 65536, 00:14:23.830 "uuid": "74a51bcc-3cab-4754-a456-4260b61819ee", 00:14:23.830 "assigned_rate_limits": { 00:14:23.830 "rw_ios_per_sec": 0, 00:14:23.830 "rw_mbytes_per_sec": 0, 00:14:23.830 "r_mbytes_per_sec": 0, 00:14:23.830 "w_mbytes_per_sec": 0 00:14:23.830 }, 00:14:23.830 "claimed": false, 00:14:23.830 "zoned": false, 00:14:23.830 "supported_io_types": { 00:14:23.830 "read": true, 00:14:23.830 "write": true, 00:14:23.830 "unmap": true, 00:14:23.830 "flush": true, 00:14:23.830 "reset": true, 00:14:23.830 "nvme_admin": false, 00:14:23.830 "nvme_io": false, 00:14:23.830 "nvme_io_md": false, 00:14:23.830 "write_zeroes": true, 
00:14:23.830 "zcopy": true, 00:14:23.830 "get_zone_info": false, 00:14:23.830 "zone_management": false, 00:14:23.830 "zone_append": false, 00:14:23.830 "compare": false, 00:14:23.830 "compare_and_write": false, 00:14:23.830 "abort": true, 00:14:23.830 "seek_hole": false, 00:14:23.830 "seek_data": false, 00:14:23.830 "copy": true, 00:14:23.830 "nvme_iov_md": false 00:14:23.830 }, 00:14:23.830 "memory_domains": [ 00:14:23.830 { 00:14:23.830 "dma_device_id": "system", 00:14:23.830 "dma_device_type": 1 00:14:23.830 }, 00:14:23.830 { 00:14:23.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.830 "dma_device_type": 2 00:14:23.830 } 00:14:23.830 ], 00:14:23.830 "driver_specific": {} 00:14:23.830 } 00:14:23.830 ] 00:14:23.830 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.830 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:23.830 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:23.830 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:23.830 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:23.830 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.830 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.830 [2024-11-25 12:13:19.690572] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:23.830 [2024-11-25 12:13:19.690635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:23.830 [2024-11-25 12:13:19.690669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:23.830 [2024-11-25 12:13:19.693120] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:23.830 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.830 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:23.830 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:23.830 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:23.830 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:23.830 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:23.830 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:23.830 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.830 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.830 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.830 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.830 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.830 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.830 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.830 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.830 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.830 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:14:23.830 "name": "Existed_Raid", 00:14:23.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.830 "strip_size_kb": 0, 00:14:23.830 "state": "configuring", 00:14:23.830 "raid_level": "raid1", 00:14:23.830 "superblock": false, 00:14:23.830 "num_base_bdevs": 3, 00:14:23.830 "num_base_bdevs_discovered": 2, 00:14:23.830 "num_base_bdevs_operational": 3, 00:14:23.830 "base_bdevs_list": [ 00:14:23.830 { 00:14:23.830 "name": "BaseBdev1", 00:14:23.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.830 "is_configured": false, 00:14:23.830 "data_offset": 0, 00:14:23.830 "data_size": 0 00:14:23.830 }, 00:14:23.830 { 00:14:23.830 "name": "BaseBdev2", 00:14:23.830 "uuid": "e1824a88-0ed2-4099-911e-3b478112fa86", 00:14:23.830 "is_configured": true, 00:14:23.830 "data_offset": 0, 00:14:23.830 "data_size": 65536 00:14:23.830 }, 00:14:23.830 { 00:14:23.830 "name": "BaseBdev3", 00:14:23.830 "uuid": "74a51bcc-3cab-4754-a456-4260b61819ee", 00:14:23.830 "is_configured": true, 00:14:23.830 "data_offset": 0, 00:14:23.830 "data_size": 65536 00:14:23.830 } 00:14:23.830 ] 00:14:23.830 }' 00:14:23.830 12:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.830 12:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.398 12:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:24.398 12:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.398 12:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.398 [2024-11-25 12:13:20.214719] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:24.398 12:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.398 12:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:14:24.398 12:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.398 12:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:24.398 12:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:24.398 12:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:24.398 12:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:24.398 12:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.398 12:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.398 12:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.398 12:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.398 12:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.398 12:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.398 12:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.398 12:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.398 12:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.398 12:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.398 "name": "Existed_Raid", 00:14:24.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.398 "strip_size_kb": 0, 00:14:24.398 "state": "configuring", 00:14:24.398 "raid_level": "raid1", 00:14:24.398 "superblock": false, 00:14:24.398 "num_base_bdevs": 3, 
00:14:24.398 "num_base_bdevs_discovered": 1, 00:14:24.398 "num_base_bdevs_operational": 3, 00:14:24.398 "base_bdevs_list": [ 00:14:24.398 { 00:14:24.398 "name": "BaseBdev1", 00:14:24.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.398 "is_configured": false, 00:14:24.398 "data_offset": 0, 00:14:24.398 "data_size": 0 00:14:24.398 }, 00:14:24.398 { 00:14:24.398 "name": null, 00:14:24.398 "uuid": "e1824a88-0ed2-4099-911e-3b478112fa86", 00:14:24.398 "is_configured": false, 00:14:24.398 "data_offset": 0, 00:14:24.398 "data_size": 65536 00:14:24.398 }, 00:14:24.398 { 00:14:24.398 "name": "BaseBdev3", 00:14:24.398 "uuid": "74a51bcc-3cab-4754-a456-4260b61819ee", 00:14:24.398 "is_configured": true, 00:14:24.398 "data_offset": 0, 00:14:24.398 "data_size": 65536 00:14:24.398 } 00:14:24.398 ] 00:14:24.398 }' 00:14:24.398 12:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.398 12:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.657 12:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.657 12:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.657 12:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.657 12:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:24.657 12:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.657 12:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:24.657 12:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:24.657 12:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.657 12:13:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.917 [2024-11-25 12:13:20.784610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:24.917 BaseBdev1 00:14:24.917 12:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.917 12:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:24.917 12:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:24.917 12:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:24.917 12:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:24.917 12:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:24.917 12:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:24.917 12:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:24.917 12:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.917 12:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.917 12:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.917 12:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:24.917 12:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.917 12:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.917 [ 00:14:24.917 { 00:14:24.917 "name": "BaseBdev1", 00:14:24.917 "aliases": [ 00:14:24.917 "9f69ff5c-d721-4fe3-b60a-e5eafece1f25" 00:14:24.917 ], 00:14:24.917 "product_name": "Malloc disk", 
00:14:24.917 "block_size": 512, 00:14:24.917 "num_blocks": 65536, 00:14:24.917 "uuid": "9f69ff5c-d721-4fe3-b60a-e5eafece1f25", 00:14:24.917 "assigned_rate_limits": { 00:14:24.917 "rw_ios_per_sec": 0, 00:14:24.917 "rw_mbytes_per_sec": 0, 00:14:24.917 "r_mbytes_per_sec": 0, 00:14:24.917 "w_mbytes_per_sec": 0 00:14:24.917 }, 00:14:24.917 "claimed": true, 00:14:24.917 "claim_type": "exclusive_write", 00:14:24.917 "zoned": false, 00:14:24.917 "supported_io_types": { 00:14:24.917 "read": true, 00:14:24.917 "write": true, 00:14:24.917 "unmap": true, 00:14:24.917 "flush": true, 00:14:24.917 "reset": true, 00:14:24.917 "nvme_admin": false, 00:14:24.917 "nvme_io": false, 00:14:24.917 "nvme_io_md": false, 00:14:24.917 "write_zeroes": true, 00:14:24.917 "zcopy": true, 00:14:24.917 "get_zone_info": false, 00:14:24.917 "zone_management": false, 00:14:24.917 "zone_append": false, 00:14:24.917 "compare": false, 00:14:24.917 "compare_and_write": false, 00:14:24.917 "abort": true, 00:14:24.917 "seek_hole": false, 00:14:24.917 "seek_data": false, 00:14:24.917 "copy": true, 00:14:24.917 "nvme_iov_md": false 00:14:24.917 }, 00:14:24.917 "memory_domains": [ 00:14:24.917 { 00:14:24.917 "dma_device_id": "system", 00:14:24.917 "dma_device_type": 1 00:14:24.917 }, 00:14:24.917 { 00:14:24.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.917 "dma_device_type": 2 00:14:24.917 } 00:14:24.917 ], 00:14:24.917 "driver_specific": {} 00:14:24.917 } 00:14:24.917 ] 00:14:24.917 12:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.917 12:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:24.917 12:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:24.917 12:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.917 12:13:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:24.917 12:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:24.917 12:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:24.917 12:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:24.917 12:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.917 12:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.917 12:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.917 12:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.917 12:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.917 12:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.917 12:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.917 12:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.917 12:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.917 12:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.917 "name": "Existed_Raid", 00:14:24.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.917 "strip_size_kb": 0, 00:14:24.917 "state": "configuring", 00:14:24.917 "raid_level": "raid1", 00:14:24.917 "superblock": false, 00:14:24.917 "num_base_bdevs": 3, 00:14:24.917 "num_base_bdevs_discovered": 2, 00:14:24.917 "num_base_bdevs_operational": 3, 00:14:24.917 "base_bdevs_list": [ 00:14:24.917 { 00:14:24.917 "name": "BaseBdev1", 00:14:24.917 "uuid": 
"9f69ff5c-d721-4fe3-b60a-e5eafece1f25", 00:14:24.917 "is_configured": true, 00:14:24.917 "data_offset": 0, 00:14:24.917 "data_size": 65536 00:14:24.917 }, 00:14:24.917 { 00:14:24.917 "name": null, 00:14:24.917 "uuid": "e1824a88-0ed2-4099-911e-3b478112fa86", 00:14:24.917 "is_configured": false, 00:14:24.917 "data_offset": 0, 00:14:24.917 "data_size": 65536 00:14:24.917 }, 00:14:24.917 { 00:14:24.917 "name": "BaseBdev3", 00:14:24.917 "uuid": "74a51bcc-3cab-4754-a456-4260b61819ee", 00:14:24.917 "is_configured": true, 00:14:24.917 "data_offset": 0, 00:14:24.917 "data_size": 65536 00:14:24.917 } 00:14:24.917 ] 00:14:24.917 }' 00:14:24.917 12:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.917 12:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.176 12:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:25.176 12:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.176 12:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.176 12:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.435 12:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.435 12:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:25.435 12:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:25.435 12:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.435 12:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.435 [2024-11-25 12:13:21.300807] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:25.435 12:13:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.435 12:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:25.435 12:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:25.435 12:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:25.435 12:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:25.435 12:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:25.435 12:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:25.435 12:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.435 12:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.435 12:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.435 12:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.435 12:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.435 12:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.435 12:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.435 12:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.435 12:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.435 12:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.435 "name": "Existed_Raid", 00:14:25.435 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:25.435 "strip_size_kb": 0, 00:14:25.435 "state": "configuring", 00:14:25.435 "raid_level": "raid1", 00:14:25.435 "superblock": false, 00:14:25.435 "num_base_bdevs": 3, 00:14:25.435 "num_base_bdevs_discovered": 1, 00:14:25.435 "num_base_bdevs_operational": 3, 00:14:25.435 "base_bdevs_list": [ 00:14:25.435 { 00:14:25.435 "name": "BaseBdev1", 00:14:25.435 "uuid": "9f69ff5c-d721-4fe3-b60a-e5eafece1f25", 00:14:25.435 "is_configured": true, 00:14:25.435 "data_offset": 0, 00:14:25.435 "data_size": 65536 00:14:25.435 }, 00:14:25.435 { 00:14:25.435 "name": null, 00:14:25.435 "uuid": "e1824a88-0ed2-4099-911e-3b478112fa86", 00:14:25.435 "is_configured": false, 00:14:25.435 "data_offset": 0, 00:14:25.435 "data_size": 65536 00:14:25.435 }, 00:14:25.435 { 00:14:25.435 "name": null, 00:14:25.435 "uuid": "74a51bcc-3cab-4754-a456-4260b61819ee", 00:14:25.435 "is_configured": false, 00:14:25.435 "data_offset": 0, 00:14:25.435 "data_size": 65536 00:14:25.435 } 00:14:25.435 ] 00:14:25.435 }' 00:14:25.435 12:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.435 12:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.003 12:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.003 12:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:26.003 12:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.003 12:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.003 12:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.003 12:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:26.003 12:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:26.003 12:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.003 12:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.003 [2024-11-25 12:13:21.849007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:26.003 12:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.003 12:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:26.003 12:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:26.003 12:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:26.003 12:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:26.003 12:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:26.003 12:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:26.003 12:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.003 12:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.003 12:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.003 12:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.003 12:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.003 12:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.003 12:13:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.003 12:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.003 12:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.003 12:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.003 "name": "Existed_Raid", 00:14:26.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.003 "strip_size_kb": 0, 00:14:26.003 "state": "configuring", 00:14:26.003 "raid_level": "raid1", 00:14:26.003 "superblock": false, 00:14:26.003 "num_base_bdevs": 3, 00:14:26.003 "num_base_bdevs_discovered": 2, 00:14:26.003 "num_base_bdevs_operational": 3, 00:14:26.003 "base_bdevs_list": [ 00:14:26.003 { 00:14:26.003 "name": "BaseBdev1", 00:14:26.003 "uuid": "9f69ff5c-d721-4fe3-b60a-e5eafece1f25", 00:14:26.003 "is_configured": true, 00:14:26.003 "data_offset": 0, 00:14:26.003 "data_size": 65536 00:14:26.003 }, 00:14:26.003 { 00:14:26.003 "name": null, 00:14:26.003 "uuid": "e1824a88-0ed2-4099-911e-3b478112fa86", 00:14:26.003 "is_configured": false, 00:14:26.003 "data_offset": 0, 00:14:26.003 "data_size": 65536 00:14:26.003 }, 00:14:26.003 { 00:14:26.003 "name": "BaseBdev3", 00:14:26.003 "uuid": "74a51bcc-3cab-4754-a456-4260b61819ee", 00:14:26.003 "is_configured": true, 00:14:26.003 "data_offset": 0, 00:14:26.003 "data_size": 65536 00:14:26.003 } 00:14:26.003 ] 00:14:26.003 }' 00:14:26.003 12:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.003 12:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.572 12:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.572 12:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.572 12:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:26.572 12:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:26.572 12:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.572 12:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:26.572 12:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:26.572 12:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.572 12:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.572 [2024-11-25 12:13:22.421162] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:26.572 12:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.572 12:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:26.572 12:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:26.572 12:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:26.572 12:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:26.572 12:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:26.572 12:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:26.572 12:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.572 12:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.572 12:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.572 12:13:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.572 12:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.572 12:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.572 12:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.572 12:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.572 12:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.572 12:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.572 "name": "Existed_Raid", 00:14:26.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.572 "strip_size_kb": 0, 00:14:26.572 "state": "configuring", 00:14:26.572 "raid_level": "raid1", 00:14:26.572 "superblock": false, 00:14:26.572 "num_base_bdevs": 3, 00:14:26.572 "num_base_bdevs_discovered": 1, 00:14:26.572 "num_base_bdevs_operational": 3, 00:14:26.572 "base_bdevs_list": [ 00:14:26.572 { 00:14:26.572 "name": null, 00:14:26.572 "uuid": "9f69ff5c-d721-4fe3-b60a-e5eafece1f25", 00:14:26.572 "is_configured": false, 00:14:26.572 "data_offset": 0, 00:14:26.572 "data_size": 65536 00:14:26.572 }, 00:14:26.572 { 00:14:26.572 "name": null, 00:14:26.572 "uuid": "e1824a88-0ed2-4099-911e-3b478112fa86", 00:14:26.572 "is_configured": false, 00:14:26.572 "data_offset": 0, 00:14:26.572 "data_size": 65536 00:14:26.572 }, 00:14:26.572 { 00:14:26.572 "name": "BaseBdev3", 00:14:26.572 "uuid": "74a51bcc-3cab-4754-a456-4260b61819ee", 00:14:26.572 "is_configured": true, 00:14:26.572 "data_offset": 0, 00:14:26.572 "data_size": 65536 00:14:26.572 } 00:14:26.572 ] 00:14:26.572 }' 00:14:26.572 12:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.572 12:13:22 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:14:27.139 12:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.139 12:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:27.139 12:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.139 12:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.140 12:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.140 12:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:27.140 12:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:27.140 12:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.140 12:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.140 [2024-11-25 12:13:23.087977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:27.140 12:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.140 12:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:27.140 12:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:27.140 12:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:27.140 12:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:27.140 12:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:27.140 12:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:14:27.140 12:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.140 12:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.140 12:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.140 12:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.140 12:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.140 12:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.140 12:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.140 12:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.140 12:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.140 12:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.140 "name": "Existed_Raid", 00:14:27.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.140 "strip_size_kb": 0, 00:14:27.140 "state": "configuring", 00:14:27.140 "raid_level": "raid1", 00:14:27.140 "superblock": false, 00:14:27.140 "num_base_bdevs": 3, 00:14:27.140 "num_base_bdevs_discovered": 2, 00:14:27.140 "num_base_bdevs_operational": 3, 00:14:27.140 "base_bdevs_list": [ 00:14:27.140 { 00:14:27.140 "name": null, 00:14:27.140 "uuid": "9f69ff5c-d721-4fe3-b60a-e5eafece1f25", 00:14:27.140 "is_configured": false, 00:14:27.140 "data_offset": 0, 00:14:27.140 "data_size": 65536 00:14:27.140 }, 00:14:27.140 { 00:14:27.140 "name": "BaseBdev2", 00:14:27.140 "uuid": "e1824a88-0ed2-4099-911e-3b478112fa86", 00:14:27.140 "is_configured": true, 00:14:27.140 "data_offset": 0, 00:14:27.140 "data_size": 65536 00:14:27.140 }, 00:14:27.140 { 
00:14:27.140 "name": "BaseBdev3", 00:14:27.140 "uuid": "74a51bcc-3cab-4754-a456-4260b61819ee", 00:14:27.140 "is_configured": true, 00:14:27.140 "data_offset": 0, 00:14:27.140 "data_size": 65536 00:14:27.140 } 00:14:27.140 ] 00:14:27.140 }' 00:14:27.140 12:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.140 12:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.707 12:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.707 12:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.707 12:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.707 12:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:27.707 12:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.707 12:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:27.707 12:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.707 12:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.707 12:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.707 12:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:27.707 12:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.707 12:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9f69ff5c-d721-4fe3-b60a-e5eafece1f25 00:14:27.707 12:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.707 12:13:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.707 [2024-11-25 12:13:23.782209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:27.707 [2024-11-25 12:13:23.782274] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:27.707 [2024-11-25 12:13:23.782287] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:27.707 [2024-11-25 12:13:23.782650] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:27.707 [2024-11-25 12:13:23.782863] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:27.708 [2024-11-25 12:13:23.782886] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:27.708 [2024-11-25 12:13:23.783200] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:27.708 NewBaseBdev 00:14:27.708 12:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.708 12:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:27.708 12:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:27.708 12:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:27.708 12:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:27.708 12:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:27.708 12:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:27.708 12:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:27.708 12:13:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.708 12:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.708 12:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.708 12:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:27.708 12:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.708 12:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.966 [ 00:14:27.966 { 00:14:27.966 "name": "NewBaseBdev", 00:14:27.966 "aliases": [ 00:14:27.966 "9f69ff5c-d721-4fe3-b60a-e5eafece1f25" 00:14:27.966 ], 00:14:27.966 "product_name": "Malloc disk", 00:14:27.966 "block_size": 512, 00:14:27.966 "num_blocks": 65536, 00:14:27.966 "uuid": "9f69ff5c-d721-4fe3-b60a-e5eafece1f25", 00:14:27.966 "assigned_rate_limits": { 00:14:27.966 "rw_ios_per_sec": 0, 00:14:27.966 "rw_mbytes_per_sec": 0, 00:14:27.966 "r_mbytes_per_sec": 0, 00:14:27.966 "w_mbytes_per_sec": 0 00:14:27.966 }, 00:14:27.966 "claimed": true, 00:14:27.966 "claim_type": "exclusive_write", 00:14:27.966 "zoned": false, 00:14:27.966 "supported_io_types": { 00:14:27.966 "read": true, 00:14:27.966 "write": true, 00:14:27.966 "unmap": true, 00:14:27.966 "flush": true, 00:14:27.966 "reset": true, 00:14:27.966 "nvme_admin": false, 00:14:27.966 "nvme_io": false, 00:14:27.966 "nvme_io_md": false, 00:14:27.966 "write_zeroes": true, 00:14:27.966 "zcopy": true, 00:14:27.966 "get_zone_info": false, 00:14:27.966 "zone_management": false, 00:14:27.966 "zone_append": false, 00:14:27.966 "compare": false, 00:14:27.966 "compare_and_write": false, 00:14:27.966 "abort": true, 00:14:27.966 "seek_hole": false, 00:14:27.966 "seek_data": false, 00:14:27.966 "copy": true, 00:14:27.966 "nvme_iov_md": false 00:14:27.966 }, 00:14:27.966 "memory_domains": [ 00:14:27.966 { 00:14:27.966 
"dma_device_id": "system", 00:14:27.966 "dma_device_type": 1 00:14:27.966 }, 00:14:27.966 { 00:14:27.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.966 "dma_device_type": 2 00:14:27.966 } 00:14:27.966 ], 00:14:27.966 "driver_specific": {} 00:14:27.966 } 00:14:27.966 ] 00:14:27.966 12:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.966 12:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:27.966 12:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:27.966 12:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:27.966 12:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:27.966 12:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:27.966 12:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:27.966 12:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:27.966 12:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.966 12:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.966 12:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.966 12:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.966 12:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.966 12:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.966 12:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.966 12:13:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.966 12:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.966 12:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.966 "name": "Existed_Raid", 00:14:27.966 "uuid": "51a0d9d6-338d-4d44-98e9-351c81276d35", 00:14:27.966 "strip_size_kb": 0, 00:14:27.966 "state": "online", 00:14:27.966 "raid_level": "raid1", 00:14:27.966 "superblock": false, 00:14:27.966 "num_base_bdevs": 3, 00:14:27.966 "num_base_bdevs_discovered": 3, 00:14:27.966 "num_base_bdevs_operational": 3, 00:14:27.966 "base_bdevs_list": [ 00:14:27.966 { 00:14:27.967 "name": "NewBaseBdev", 00:14:27.967 "uuid": "9f69ff5c-d721-4fe3-b60a-e5eafece1f25", 00:14:27.967 "is_configured": true, 00:14:27.967 "data_offset": 0, 00:14:27.967 "data_size": 65536 00:14:27.967 }, 00:14:27.967 { 00:14:27.967 "name": "BaseBdev2", 00:14:27.967 "uuid": "e1824a88-0ed2-4099-911e-3b478112fa86", 00:14:27.967 "is_configured": true, 00:14:27.967 "data_offset": 0, 00:14:27.967 "data_size": 65536 00:14:27.967 }, 00:14:27.967 { 00:14:27.967 "name": "BaseBdev3", 00:14:27.967 "uuid": "74a51bcc-3cab-4754-a456-4260b61819ee", 00:14:27.967 "is_configured": true, 00:14:27.967 "data_offset": 0, 00:14:27.967 "data_size": 65536 00:14:27.967 } 00:14:27.967 ] 00:14:27.967 }' 00:14:27.967 12:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.967 12:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.224 12:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:28.224 12:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:28.224 12:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:28.224 
12:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:28.224 12:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:28.224 12:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:28.224 12:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:28.224 12:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:28.224 12:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.224 12:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.482 [2024-11-25 12:13:24.318809] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:28.482 12:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.483 12:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:28.483 "name": "Existed_Raid", 00:14:28.483 "aliases": [ 00:14:28.483 "51a0d9d6-338d-4d44-98e9-351c81276d35" 00:14:28.483 ], 00:14:28.483 "product_name": "Raid Volume", 00:14:28.483 "block_size": 512, 00:14:28.483 "num_blocks": 65536, 00:14:28.483 "uuid": "51a0d9d6-338d-4d44-98e9-351c81276d35", 00:14:28.483 "assigned_rate_limits": { 00:14:28.483 "rw_ios_per_sec": 0, 00:14:28.483 "rw_mbytes_per_sec": 0, 00:14:28.483 "r_mbytes_per_sec": 0, 00:14:28.483 "w_mbytes_per_sec": 0 00:14:28.483 }, 00:14:28.483 "claimed": false, 00:14:28.483 "zoned": false, 00:14:28.483 "supported_io_types": { 00:14:28.483 "read": true, 00:14:28.483 "write": true, 00:14:28.483 "unmap": false, 00:14:28.483 "flush": false, 00:14:28.483 "reset": true, 00:14:28.483 "nvme_admin": false, 00:14:28.483 "nvme_io": false, 00:14:28.483 "nvme_io_md": false, 00:14:28.483 "write_zeroes": true, 00:14:28.483 "zcopy": false, 00:14:28.483 
"get_zone_info": false, 00:14:28.483 "zone_management": false, 00:14:28.483 "zone_append": false, 00:14:28.483 "compare": false, 00:14:28.483 "compare_and_write": false, 00:14:28.483 "abort": false, 00:14:28.483 "seek_hole": false, 00:14:28.483 "seek_data": false, 00:14:28.483 "copy": false, 00:14:28.483 "nvme_iov_md": false 00:14:28.483 }, 00:14:28.483 "memory_domains": [ 00:14:28.483 { 00:14:28.483 "dma_device_id": "system", 00:14:28.483 "dma_device_type": 1 00:14:28.483 }, 00:14:28.483 { 00:14:28.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.483 "dma_device_type": 2 00:14:28.483 }, 00:14:28.483 { 00:14:28.483 "dma_device_id": "system", 00:14:28.483 "dma_device_type": 1 00:14:28.483 }, 00:14:28.483 { 00:14:28.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.483 "dma_device_type": 2 00:14:28.483 }, 00:14:28.483 { 00:14:28.483 "dma_device_id": "system", 00:14:28.483 "dma_device_type": 1 00:14:28.483 }, 00:14:28.483 { 00:14:28.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.483 "dma_device_type": 2 00:14:28.483 } 00:14:28.483 ], 00:14:28.483 "driver_specific": { 00:14:28.483 "raid": { 00:14:28.483 "uuid": "51a0d9d6-338d-4d44-98e9-351c81276d35", 00:14:28.483 "strip_size_kb": 0, 00:14:28.483 "state": "online", 00:14:28.483 "raid_level": "raid1", 00:14:28.483 "superblock": false, 00:14:28.483 "num_base_bdevs": 3, 00:14:28.483 "num_base_bdevs_discovered": 3, 00:14:28.483 "num_base_bdevs_operational": 3, 00:14:28.483 "base_bdevs_list": [ 00:14:28.483 { 00:14:28.483 "name": "NewBaseBdev", 00:14:28.483 "uuid": "9f69ff5c-d721-4fe3-b60a-e5eafece1f25", 00:14:28.483 "is_configured": true, 00:14:28.483 "data_offset": 0, 00:14:28.483 "data_size": 65536 00:14:28.483 }, 00:14:28.483 { 00:14:28.483 "name": "BaseBdev2", 00:14:28.483 "uuid": "e1824a88-0ed2-4099-911e-3b478112fa86", 00:14:28.483 "is_configured": true, 00:14:28.483 "data_offset": 0, 00:14:28.483 "data_size": 65536 00:14:28.483 }, 00:14:28.483 { 00:14:28.483 "name": "BaseBdev3", 00:14:28.483 "uuid": 
"74a51bcc-3cab-4754-a456-4260b61819ee", 00:14:28.483 "is_configured": true, 00:14:28.483 "data_offset": 0, 00:14:28.483 "data_size": 65536 00:14:28.483 } 00:14:28.483 ] 00:14:28.483 } 00:14:28.483 } 00:14:28.483 }' 00:14:28.483 12:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:28.483 12:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:28.483 BaseBdev2 00:14:28.483 BaseBdev3' 00:14:28.483 12:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:28.483 12:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:28.483 12:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:28.483 12:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:28.483 12:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:28.483 12:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.483 12:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.483 12:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.483 12:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:28.483 12:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:28.483 12:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:28.483 12:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:14:28.483 12:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:28.483 12:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.483 12:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.483 12:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.483 12:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:28.483 12:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:28.483 12:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:28.742 12:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:28.742 12:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.742 12:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.742 12:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:28.742 12:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.742 12:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:28.742 12:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:28.742 12:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:28.742 12:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.742 12:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.742 
[2024-11-25 12:13:24.630543] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:28.742 [2024-11-25 12:13:24.630713] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:28.742 [2024-11-25 12:13:24.630841] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:28.742 [2024-11-25 12:13:24.631218] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:28.742 [2024-11-25 12:13:24.631236] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:28.742 12:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.742 12:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67464 00:14:28.742 12:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67464 ']' 00:14:28.742 12:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67464 00:14:28.742 12:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:28.742 12:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:28.742 12:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67464 00:14:28.742 12:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:28.742 killing process with pid 67464 00:14:28.742 12:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:28.742 12:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67464' 00:14:28.742 12:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67464 00:14:28.742 [2024-11-25 
12:13:24.668048] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:28.742 12:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67464 00:14:29.000 [2024-11-25 12:13:24.941558] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:29.936 12:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:29.936 00:14:29.936 real 0m11.604s 00:14:29.936 user 0m19.193s 00:14:29.936 sys 0m1.530s 00:14:29.936 ************************************ 00:14:29.936 END TEST raid_state_function_test 00:14:29.936 ************************************ 00:14:29.936 12:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:29.936 12:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.194 12:13:26 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:14:30.194 12:13:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:30.194 12:13:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:30.194 12:13:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:30.194 ************************************ 00:14:30.194 START TEST raid_state_function_test_sb 00:14:30.194 ************************************ 00:14:30.194 12:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:14:30.194 12:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:30.194 12:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:30.194 12:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:30.194 12:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:30.194 12:13:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:30.194 12:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:30.194 12:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:30.194 12:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:30.194 12:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:30.194 12:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:30.194 12:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:30.194 12:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:30.194 12:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:30.194 12:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:30.194 12:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:30.194 12:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:30.194 12:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:30.194 12:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:30.194 12:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:30.194 12:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:30.195 12:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:30.195 12:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:30.195 
12:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:30.195 12:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:30.195 12:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:30.195 Process raid pid: 68096 00:14:30.195 12:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68096 00:14:30.195 12:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68096' 00:14:30.195 12:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:30.195 12:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68096 00:14:30.195 12:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68096 ']' 00:14:30.195 12:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.195 12:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:30.195 12:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.195 12:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:30.195 12:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.195 [2024-11-25 12:13:26.167655] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:14:30.195 [2024-11-25 12:13:26.167815] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.453 [2024-11-25 12:13:26.340861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.453 [2024-11-25 12:13:26.472993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.709 [2024-11-25 12:13:26.680658] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:30.709 [2024-11-25 12:13:26.680924] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:31.275 12:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:31.275 12:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:31.275 12:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:31.275 12:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.275 12:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.275 [2024-11-25 12:13:27.120977] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:31.275 [2024-11-25 12:13:27.121100] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:31.276 [2024-11-25 12:13:27.121123] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:31.276 [2024-11-25 12:13:27.121145] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:31.276 [2024-11-25 12:13:27.121158] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:31.276 [2024-11-25 12:13:27.121177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:31.276 12:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.276 12:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:31.276 12:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:31.276 12:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:31.276 12:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:31.276 12:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:31.276 12:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.276 12:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.276 12:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.276 12:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.276 12:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.276 12:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.276 12:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.276 12:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.276 12:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.276 12:13:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.276 12:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.276 "name": "Existed_Raid", 00:14:31.276 "uuid": "3cf0abc9-af65-4e21-80ec-b6dfa559c417", 00:14:31.276 "strip_size_kb": 0, 00:14:31.276 "state": "configuring", 00:14:31.276 "raid_level": "raid1", 00:14:31.276 "superblock": true, 00:14:31.276 "num_base_bdevs": 3, 00:14:31.276 "num_base_bdevs_discovered": 0, 00:14:31.276 "num_base_bdevs_operational": 3, 00:14:31.276 "base_bdevs_list": [ 00:14:31.276 { 00:14:31.276 "name": "BaseBdev1", 00:14:31.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.276 "is_configured": false, 00:14:31.276 "data_offset": 0, 00:14:31.276 "data_size": 0 00:14:31.276 }, 00:14:31.276 { 00:14:31.276 "name": "BaseBdev2", 00:14:31.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.276 "is_configured": false, 00:14:31.276 "data_offset": 0, 00:14:31.276 "data_size": 0 00:14:31.276 }, 00:14:31.276 { 00:14:31.276 "name": "BaseBdev3", 00:14:31.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.276 "is_configured": false, 00:14:31.276 "data_offset": 0, 00:14:31.276 "data_size": 0 00:14:31.276 } 00:14:31.276 ] 00:14:31.276 }' 00:14:31.276 12:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.276 12:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.841 12:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:31.841 12:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.841 12:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.841 [2024-11-25 12:13:27.665060] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:31.841 [2024-11-25 12:13:27.665151] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:31.841 12:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.841 12:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:31.841 12:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.841 12:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.841 [2024-11-25 12:13:27.672960] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:31.841 [2024-11-25 12:13:27.673037] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:31.841 [2024-11-25 12:13:27.673058] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:31.841 [2024-11-25 12:13:27.673080] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:31.841 [2024-11-25 12:13:27.673093] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:31.841 [2024-11-25 12:13:27.673112] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:31.841 12:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.841 12:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:31.841 12:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.841 12:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.841 [2024-11-25 12:13:27.726561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:31.841 BaseBdev1 
00:14:31.841 12:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.841 12:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:31.841 12:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:31.841 12:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:31.841 12:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:31.842 12:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:31.842 12:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:31.842 12:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:31.842 12:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.842 12:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.842 12:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.842 12:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:31.842 12:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.842 12:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.842 [ 00:14:31.842 { 00:14:31.842 "name": "BaseBdev1", 00:14:31.842 "aliases": [ 00:14:31.842 "9a5e606d-8202-4fb6-b3aa-20c0dfa6ce32" 00:14:31.842 ], 00:14:31.842 "product_name": "Malloc disk", 00:14:31.842 "block_size": 512, 00:14:31.842 "num_blocks": 65536, 00:14:31.842 "uuid": "9a5e606d-8202-4fb6-b3aa-20c0dfa6ce32", 00:14:31.842 "assigned_rate_limits": { 00:14:31.842 
"rw_ios_per_sec": 0, 00:14:31.842 "rw_mbytes_per_sec": 0, 00:14:31.842 "r_mbytes_per_sec": 0, 00:14:31.842 "w_mbytes_per_sec": 0 00:14:31.842 }, 00:14:31.842 "claimed": true, 00:14:31.842 "claim_type": "exclusive_write", 00:14:31.842 "zoned": false, 00:14:31.842 "supported_io_types": { 00:14:31.842 "read": true, 00:14:31.842 "write": true, 00:14:31.842 "unmap": true, 00:14:31.842 "flush": true, 00:14:31.842 "reset": true, 00:14:31.842 "nvme_admin": false, 00:14:31.842 "nvme_io": false, 00:14:31.842 "nvme_io_md": false, 00:14:31.842 "write_zeroes": true, 00:14:31.842 "zcopy": true, 00:14:31.842 "get_zone_info": false, 00:14:31.842 "zone_management": false, 00:14:31.842 "zone_append": false, 00:14:31.842 "compare": false, 00:14:31.842 "compare_and_write": false, 00:14:31.842 "abort": true, 00:14:31.842 "seek_hole": false, 00:14:31.842 "seek_data": false, 00:14:31.842 "copy": true, 00:14:31.842 "nvme_iov_md": false 00:14:31.842 }, 00:14:31.842 "memory_domains": [ 00:14:31.842 { 00:14:31.842 "dma_device_id": "system", 00:14:31.842 "dma_device_type": 1 00:14:31.842 }, 00:14:31.842 { 00:14:31.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.842 "dma_device_type": 2 00:14:31.842 } 00:14:31.842 ], 00:14:31.842 "driver_specific": {} 00:14:31.842 } 00:14:31.842 ] 00:14:31.842 12:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.842 12:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:31.842 12:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:31.842 12:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:31.842 12:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:31.842 12:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:14:31.842 12:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:31.842 12:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.842 12:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.842 12:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.842 12:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.842 12:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.842 12:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.842 12:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.842 12:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.842 12:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.842 12:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.842 12:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.842 "name": "Existed_Raid", 00:14:31.842 "uuid": "034fd6dc-1deb-4076-b21d-5ba5e29e22a7", 00:14:31.842 "strip_size_kb": 0, 00:14:31.842 "state": "configuring", 00:14:31.842 "raid_level": "raid1", 00:14:31.842 "superblock": true, 00:14:31.842 "num_base_bdevs": 3, 00:14:31.842 "num_base_bdevs_discovered": 1, 00:14:31.842 "num_base_bdevs_operational": 3, 00:14:31.842 "base_bdevs_list": [ 00:14:31.842 { 00:14:31.842 "name": "BaseBdev1", 00:14:31.842 "uuid": "9a5e606d-8202-4fb6-b3aa-20c0dfa6ce32", 00:14:31.842 "is_configured": true, 00:14:31.842 "data_offset": 2048, 00:14:31.842 "data_size": 63488 
00:14:31.842 }, 00:14:31.842 { 00:14:31.842 "name": "BaseBdev2", 00:14:31.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.842 "is_configured": false, 00:14:31.842 "data_offset": 0, 00:14:31.842 "data_size": 0 00:14:31.842 }, 00:14:31.842 { 00:14:31.842 "name": "BaseBdev3", 00:14:31.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.842 "is_configured": false, 00:14:31.842 "data_offset": 0, 00:14:31.842 "data_size": 0 00:14:31.842 } 00:14:31.842 ] 00:14:31.842 }' 00:14:31.842 12:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.842 12:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.409 12:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:32.409 12:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.409 12:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.409 [2024-11-25 12:13:28.298830] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:32.409 [2024-11-25 12:13:28.298953] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:32.409 12:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.409 12:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:32.409 12:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.409 12:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.409 [2024-11-25 12:13:28.306825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:32.409 [2024-11-25 12:13:28.309543] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:32.409 [2024-11-25 12:13:28.309883] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:32.409 [2024-11-25 12:13:28.309918] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:32.409 [2024-11-25 12:13:28.309941] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:32.409 12:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.409 12:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:32.409 12:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:32.409 12:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:32.409 12:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:32.409 12:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:32.409 12:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:32.409 12:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:32.409 12:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.409 12:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.409 12:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.409 12:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.409 12:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:14:32.409 12:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.409 12:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.409 12:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.409 12:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.409 12:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.409 12:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.409 "name": "Existed_Raid", 00:14:32.409 "uuid": "fe963e41-7e14-4fd7-854f-185d3656f8f3", 00:14:32.409 "strip_size_kb": 0, 00:14:32.409 "state": "configuring", 00:14:32.409 "raid_level": "raid1", 00:14:32.409 "superblock": true, 00:14:32.409 "num_base_bdevs": 3, 00:14:32.409 "num_base_bdevs_discovered": 1, 00:14:32.409 "num_base_bdevs_operational": 3, 00:14:32.409 "base_bdevs_list": [ 00:14:32.409 { 00:14:32.409 "name": "BaseBdev1", 00:14:32.409 "uuid": "9a5e606d-8202-4fb6-b3aa-20c0dfa6ce32", 00:14:32.409 "is_configured": true, 00:14:32.409 "data_offset": 2048, 00:14:32.409 "data_size": 63488 00:14:32.409 }, 00:14:32.409 { 00:14:32.409 "name": "BaseBdev2", 00:14:32.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.409 "is_configured": false, 00:14:32.409 "data_offset": 0, 00:14:32.409 "data_size": 0 00:14:32.409 }, 00:14:32.409 { 00:14:32.409 "name": "BaseBdev3", 00:14:32.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.409 "is_configured": false, 00:14:32.409 "data_offset": 0, 00:14:32.409 "data_size": 0 00:14:32.409 } 00:14:32.409 ] 00:14:32.409 }' 00:14:32.409 12:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.410 12:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:14:32.976 12:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:32.976 12:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.976 12:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.976 [2024-11-25 12:13:28.842197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:32.976 BaseBdev2 00:14:32.976 12:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.976 12:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:32.976 12:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:32.976 12:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:32.976 12:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:32.976 12:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:32.976 12:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:32.976 12:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:32.976 12:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.976 12:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.976 12:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.976 12:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:32.976 12:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:32.976 12:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.976 [ 00:14:32.976 { 00:14:32.976 "name": "BaseBdev2", 00:14:32.976 "aliases": [ 00:14:32.976 "2121be52-514a-40e3-9aa4-7a2b5dfc2573" 00:14:32.976 ], 00:14:32.976 "product_name": "Malloc disk", 00:14:32.976 "block_size": 512, 00:14:32.976 "num_blocks": 65536, 00:14:32.976 "uuid": "2121be52-514a-40e3-9aa4-7a2b5dfc2573", 00:14:32.976 "assigned_rate_limits": { 00:14:32.976 "rw_ios_per_sec": 0, 00:14:32.976 "rw_mbytes_per_sec": 0, 00:14:32.976 "r_mbytes_per_sec": 0, 00:14:32.976 "w_mbytes_per_sec": 0 00:14:32.976 }, 00:14:32.976 "claimed": true, 00:14:32.976 "claim_type": "exclusive_write", 00:14:32.976 "zoned": false, 00:14:32.976 "supported_io_types": { 00:14:32.976 "read": true, 00:14:32.976 "write": true, 00:14:32.976 "unmap": true, 00:14:32.976 "flush": true, 00:14:32.976 "reset": true, 00:14:32.976 "nvme_admin": false, 00:14:32.976 "nvme_io": false, 00:14:32.976 "nvme_io_md": false, 00:14:32.976 "write_zeroes": true, 00:14:32.976 "zcopy": true, 00:14:32.976 "get_zone_info": false, 00:14:32.976 "zone_management": false, 00:14:32.976 "zone_append": false, 00:14:32.976 "compare": false, 00:14:32.976 "compare_and_write": false, 00:14:32.976 "abort": true, 00:14:32.976 "seek_hole": false, 00:14:32.976 "seek_data": false, 00:14:32.976 "copy": true, 00:14:32.976 "nvme_iov_md": false 00:14:32.976 }, 00:14:32.976 "memory_domains": [ 00:14:32.976 { 00:14:32.976 "dma_device_id": "system", 00:14:32.976 "dma_device_type": 1 00:14:32.976 }, 00:14:32.976 { 00:14:32.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.976 "dma_device_type": 2 00:14:32.976 } 00:14:32.976 ], 00:14:32.976 "driver_specific": {} 00:14:32.976 } 00:14:32.976 ] 00:14:32.976 12:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.976 12:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:14:32.976 12:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:32.976 12:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:32.976 12:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:32.976 12:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:32.976 12:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:32.976 12:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:32.976 12:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:32.976 12:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.976 12:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.976 12:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.976 12:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.976 12:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.976 12:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.976 12:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.976 12:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.977 12:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.977 12:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.977 
12:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.977 "name": "Existed_Raid", 00:14:32.977 "uuid": "fe963e41-7e14-4fd7-854f-185d3656f8f3", 00:14:32.977 "strip_size_kb": 0, 00:14:32.977 "state": "configuring", 00:14:32.977 "raid_level": "raid1", 00:14:32.977 "superblock": true, 00:14:32.977 "num_base_bdevs": 3, 00:14:32.977 "num_base_bdevs_discovered": 2, 00:14:32.977 "num_base_bdevs_operational": 3, 00:14:32.977 "base_bdevs_list": [ 00:14:32.977 { 00:14:32.977 "name": "BaseBdev1", 00:14:32.977 "uuid": "9a5e606d-8202-4fb6-b3aa-20c0dfa6ce32", 00:14:32.977 "is_configured": true, 00:14:32.977 "data_offset": 2048, 00:14:32.977 "data_size": 63488 00:14:32.977 }, 00:14:32.977 { 00:14:32.977 "name": "BaseBdev2", 00:14:32.977 "uuid": "2121be52-514a-40e3-9aa4-7a2b5dfc2573", 00:14:32.977 "is_configured": true, 00:14:32.977 "data_offset": 2048, 00:14:32.977 "data_size": 63488 00:14:32.977 }, 00:14:32.977 { 00:14:32.977 "name": "BaseBdev3", 00:14:32.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.977 "is_configured": false, 00:14:32.977 "data_offset": 0, 00:14:32.977 "data_size": 0 00:14:32.977 } 00:14:32.977 ] 00:14:32.977 }' 00:14:32.977 12:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.977 12:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.566 12:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:33.566 12:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.566 12:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.566 [2024-11-25 12:13:29.431814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:33.566 [2024-11-25 12:13:29.432225] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:14:33.566 [2024-11-25 12:13:29.432262] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:33.566 [2024-11-25 12:13:29.432710] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:33.566 BaseBdev3 00:14:33.566 [2024-11-25 12:13:29.432952] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:33.566 [2024-11-25 12:13:29.433228] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:33.566 [2024-11-25 12:13:29.433607] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.566 12:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.566 12:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:33.566 12:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:33.566 12:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:33.566 12:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:33.566 12:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:33.566 12:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:33.566 12:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:33.566 12:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.566 12:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.566 12:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.566 12:13:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:33.566 12:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.566 12:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.566 [ 00:14:33.566 { 00:14:33.566 "name": "BaseBdev3", 00:14:33.566 "aliases": [ 00:14:33.566 "fb551d49-fa5e-4469-8ad1-185ab158253a" 00:14:33.566 ], 00:14:33.566 "product_name": "Malloc disk", 00:14:33.566 "block_size": 512, 00:14:33.566 "num_blocks": 65536, 00:14:33.566 "uuid": "fb551d49-fa5e-4469-8ad1-185ab158253a", 00:14:33.566 "assigned_rate_limits": { 00:14:33.566 "rw_ios_per_sec": 0, 00:14:33.566 "rw_mbytes_per_sec": 0, 00:14:33.566 "r_mbytes_per_sec": 0, 00:14:33.566 "w_mbytes_per_sec": 0 00:14:33.566 }, 00:14:33.566 "claimed": true, 00:14:33.566 "claim_type": "exclusive_write", 00:14:33.566 "zoned": false, 00:14:33.566 "supported_io_types": { 00:14:33.566 "read": true, 00:14:33.566 "write": true, 00:14:33.566 "unmap": true, 00:14:33.566 "flush": true, 00:14:33.566 "reset": true, 00:14:33.566 "nvme_admin": false, 00:14:33.566 "nvme_io": false, 00:14:33.566 "nvme_io_md": false, 00:14:33.566 "write_zeroes": true, 00:14:33.566 "zcopy": true, 00:14:33.566 "get_zone_info": false, 00:14:33.566 "zone_management": false, 00:14:33.566 "zone_append": false, 00:14:33.566 "compare": false, 00:14:33.566 "compare_and_write": false, 00:14:33.566 "abort": true, 00:14:33.566 "seek_hole": false, 00:14:33.566 "seek_data": false, 00:14:33.566 "copy": true, 00:14:33.566 "nvme_iov_md": false 00:14:33.566 }, 00:14:33.566 "memory_domains": [ 00:14:33.566 { 00:14:33.566 "dma_device_id": "system", 00:14:33.566 "dma_device_type": 1 00:14:33.566 }, 00:14:33.566 { 00:14:33.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:33.566 "dma_device_type": 2 00:14:33.566 } 00:14:33.566 ], 00:14:33.566 "driver_specific": {} 00:14:33.566 } 00:14:33.566 ] 
00:14:33.566 12:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.566 12:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:33.566 12:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:33.566 12:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:33.566 12:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:33.566 12:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:33.566 12:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.566 12:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:33.566 12:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:33.566 12:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:33.566 12:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.566 12:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.566 12:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.566 12:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.566 12:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:33.566 12:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.566 12:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.566 
12:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.566 12:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.566 12:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.566 "name": "Existed_Raid", 00:14:33.566 "uuid": "fe963e41-7e14-4fd7-854f-185d3656f8f3", 00:14:33.566 "strip_size_kb": 0, 00:14:33.566 "state": "online", 00:14:33.566 "raid_level": "raid1", 00:14:33.566 "superblock": true, 00:14:33.566 "num_base_bdevs": 3, 00:14:33.566 "num_base_bdevs_discovered": 3, 00:14:33.566 "num_base_bdevs_operational": 3, 00:14:33.566 "base_bdevs_list": [ 00:14:33.566 { 00:14:33.566 "name": "BaseBdev1", 00:14:33.566 "uuid": "9a5e606d-8202-4fb6-b3aa-20c0dfa6ce32", 00:14:33.566 "is_configured": true, 00:14:33.566 "data_offset": 2048, 00:14:33.566 "data_size": 63488 00:14:33.566 }, 00:14:33.566 { 00:14:33.566 "name": "BaseBdev2", 00:14:33.566 "uuid": "2121be52-514a-40e3-9aa4-7a2b5dfc2573", 00:14:33.566 "is_configured": true, 00:14:33.566 "data_offset": 2048, 00:14:33.566 "data_size": 63488 00:14:33.566 }, 00:14:33.566 { 00:14:33.567 "name": "BaseBdev3", 00:14:33.567 "uuid": "fb551d49-fa5e-4469-8ad1-185ab158253a", 00:14:33.567 "is_configured": true, 00:14:33.567 "data_offset": 2048, 00:14:33.567 "data_size": 63488 00:14:33.567 } 00:14:33.567 ] 00:14:33.567 }' 00:14:33.567 12:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.567 12:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.134 12:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:34.134 12:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:34.134 12:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:14:34.134 12:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:34.134 12:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:34.134 12:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:34.134 12:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:34.134 12:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:34.134 12:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.134 12:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.134 [2024-11-25 12:13:29.980545] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:34.134 12:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.134 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:34.134 "name": "Existed_Raid", 00:14:34.134 "aliases": [ 00:14:34.134 "fe963e41-7e14-4fd7-854f-185d3656f8f3" 00:14:34.134 ], 00:14:34.134 "product_name": "Raid Volume", 00:14:34.134 "block_size": 512, 00:14:34.134 "num_blocks": 63488, 00:14:34.134 "uuid": "fe963e41-7e14-4fd7-854f-185d3656f8f3", 00:14:34.134 "assigned_rate_limits": { 00:14:34.134 "rw_ios_per_sec": 0, 00:14:34.134 "rw_mbytes_per_sec": 0, 00:14:34.134 "r_mbytes_per_sec": 0, 00:14:34.134 "w_mbytes_per_sec": 0 00:14:34.134 }, 00:14:34.134 "claimed": false, 00:14:34.134 "zoned": false, 00:14:34.134 "supported_io_types": { 00:14:34.134 "read": true, 00:14:34.134 "write": true, 00:14:34.134 "unmap": false, 00:14:34.134 "flush": false, 00:14:34.134 "reset": true, 00:14:34.134 "nvme_admin": false, 00:14:34.134 "nvme_io": false, 00:14:34.134 "nvme_io_md": false, 00:14:34.134 "write_zeroes": true, 
00:14:34.134 "zcopy": false, 00:14:34.134 "get_zone_info": false, 00:14:34.134 "zone_management": false, 00:14:34.134 "zone_append": false, 00:14:34.134 "compare": false, 00:14:34.134 "compare_and_write": false, 00:14:34.134 "abort": false, 00:14:34.134 "seek_hole": false, 00:14:34.134 "seek_data": false, 00:14:34.134 "copy": false, 00:14:34.134 "nvme_iov_md": false 00:14:34.134 }, 00:14:34.134 "memory_domains": [ 00:14:34.134 { 00:14:34.134 "dma_device_id": "system", 00:14:34.134 "dma_device_type": 1 00:14:34.134 }, 00:14:34.134 { 00:14:34.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.134 "dma_device_type": 2 00:14:34.134 }, 00:14:34.134 { 00:14:34.134 "dma_device_id": "system", 00:14:34.134 "dma_device_type": 1 00:14:34.134 }, 00:14:34.134 { 00:14:34.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.135 "dma_device_type": 2 00:14:34.135 }, 00:14:34.135 { 00:14:34.135 "dma_device_id": "system", 00:14:34.135 "dma_device_type": 1 00:14:34.135 }, 00:14:34.135 { 00:14:34.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.135 "dma_device_type": 2 00:14:34.135 } 00:14:34.135 ], 00:14:34.135 "driver_specific": { 00:14:34.135 "raid": { 00:14:34.135 "uuid": "fe963e41-7e14-4fd7-854f-185d3656f8f3", 00:14:34.135 "strip_size_kb": 0, 00:14:34.135 "state": "online", 00:14:34.135 "raid_level": "raid1", 00:14:34.135 "superblock": true, 00:14:34.135 "num_base_bdevs": 3, 00:14:34.135 "num_base_bdevs_discovered": 3, 00:14:34.135 "num_base_bdevs_operational": 3, 00:14:34.135 "base_bdevs_list": [ 00:14:34.135 { 00:14:34.135 "name": "BaseBdev1", 00:14:34.135 "uuid": "9a5e606d-8202-4fb6-b3aa-20c0dfa6ce32", 00:14:34.135 "is_configured": true, 00:14:34.135 "data_offset": 2048, 00:14:34.135 "data_size": 63488 00:14:34.135 }, 00:14:34.135 { 00:14:34.135 "name": "BaseBdev2", 00:14:34.135 "uuid": "2121be52-514a-40e3-9aa4-7a2b5dfc2573", 00:14:34.135 "is_configured": true, 00:14:34.135 "data_offset": 2048, 00:14:34.135 "data_size": 63488 00:14:34.135 }, 00:14:34.135 { 
00:14:34.135 "name": "BaseBdev3", 00:14:34.135 "uuid": "fb551d49-fa5e-4469-8ad1-185ab158253a", 00:14:34.135 "is_configured": true, 00:14:34.135 "data_offset": 2048, 00:14:34.135 "data_size": 63488 00:14:34.135 } 00:14:34.135 ] 00:14:34.135 } 00:14:34.135 } 00:14:34.135 }' 00:14:34.135 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:34.135 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:34.135 BaseBdev2 00:14:34.135 BaseBdev3' 00:14:34.135 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:34.135 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:34.135 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:34.135 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:34.135 12:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.135 12:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.135 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:34.135 12:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.135 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:34.135 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:34.135 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:34.135 12:13:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:34.135 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:34.135 12:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.135 12:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.135 12:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.135 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:34.135 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:34.135 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:34.135 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:34.135 12:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.135 12:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.135 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:34.393 12:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.393 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:34.393 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:34.393 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:34.393 12:13:30 bdev_raid.raid_state_function_test_sb -- 
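The `bdev_raid.sh@191`–`@193` loop traced above compares a `[block_size, md_size, md_interleave, dif_type]` tuple for the raid bdev against the same tuple for each base bdev. A minimal standalone sketch of that comparison follows; `get_bdev_fields` is a hypothetical stub, where the real harness pipes `rpc_cmd bdev_get_bdevs -b "$name"` through the `jq` filter shown in the trace:

```shell
#!/usr/bin/env bash
# Sketch of the bdev_raid.sh@191-193 comparison loop seen in the trace.
# get_bdev_fields is a hypothetical stand-in for:
#   rpc_cmd bdev_get_bdevs -b "$name" \
#     | jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
get_bdev_fields() {
    # The malloc bdevs here have block_size 512 and no metadata, so the three
    # trailing fields are null; jq's join(" ") renders them as empty strings,
    # yielding "512" followed by three spaces (hence the '512 ' in the trace).
    printf '512   '
}

cmp_raid_bdev='512   '
status=ok
for name in BaseBdev1 BaseBdev2 BaseBdev3; do
    cmp_base_bdev=$(get_bdev_fields "$name")
    # Command substitution strips trailing newlines but keeps trailing spaces,
    # so the exact-match test below is meaningful.
    [[ $cmp_base_bdev == "$cmp_raid_bdev" ]] || status="mismatch:$name"
done
```

This is why the trace's `[[ 512 == \5\1\2\ \ \ ]]` tests compare against a literal `512` plus three escaped spaces: the base bdevs must match the raid bdev's geometry field-for-field, empty fields included.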
common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.393 12:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.393 [2024-11-25 12:13:30.260285] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:34.393 12:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.393 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:34.393 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:34.393 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:34.394 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:34.394 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:34.394 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:34.394 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:34.394 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.394 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:34.394 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:34.394 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:34.394 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.394 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.394 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.394 
12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.394 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.394 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:34.394 12:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.394 12:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.394 12:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.394 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.394 "name": "Existed_Raid", 00:14:34.394 "uuid": "fe963e41-7e14-4fd7-854f-185d3656f8f3", 00:14:34.394 "strip_size_kb": 0, 00:14:34.394 "state": "online", 00:14:34.394 "raid_level": "raid1", 00:14:34.394 "superblock": true, 00:14:34.394 "num_base_bdevs": 3, 00:14:34.394 "num_base_bdevs_discovered": 2, 00:14:34.394 "num_base_bdevs_operational": 2, 00:14:34.394 "base_bdevs_list": [ 00:14:34.394 { 00:14:34.394 "name": null, 00:14:34.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.394 "is_configured": false, 00:14:34.394 "data_offset": 0, 00:14:34.394 "data_size": 63488 00:14:34.394 }, 00:14:34.394 { 00:14:34.394 "name": "BaseBdev2", 00:14:34.394 "uuid": "2121be52-514a-40e3-9aa4-7a2b5dfc2573", 00:14:34.394 "is_configured": true, 00:14:34.394 "data_offset": 2048, 00:14:34.394 "data_size": 63488 00:14:34.394 }, 00:14:34.394 { 00:14:34.394 "name": "BaseBdev3", 00:14:34.394 "uuid": "fb551d49-fa5e-4469-8ad1-185ab158253a", 00:14:34.394 "is_configured": true, 00:14:34.394 "data_offset": 2048, 00:14:34.394 "data_size": 63488 00:14:34.394 } 00:14:34.394 ] 00:14:34.394 }' 00:14:34.394 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.394 
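The `has_redundancy raid1` call at `@261` above is what lets the expected state stay `online` after `BaseBdev1` is deleted. Only the `raid1` branch (and its `return 0`) is visible in this trace; a runnable sketch under that assumption, treating any level not shown as non-redundant:

```shell
# Sketch of bdev_raid.sh's has_redundancy helper (@198-199 in the trace).
# Only the raid1 branch appears in the log; mapping every other level to
# "no redundancy" is an assumption made for this standalone sketch.
has_redundancy() {
    case $1 in
        raid1) return 0 ;;   # mirrored: the array survives losing a base bdev
        *) return 1 ;;       # e.g. a striped level would go offline instead
    esac
}

# Mirrors the @261-@264 flow: pick the post-removal expected_state.
if has_redundancy raid1; then
    expected_state=online
else
    expected_state=offline
fi
```

The subsequent `verify_raid_bdev_state Existed_Raid online raid1 0 2` call then checks that the array is still `online` with `num_base_bdevs_discovered`/`_operational` down to 2 and a null placeholder where `BaseBdev1` used to be.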
12:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.959 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:34.959 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:34.959 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:34.959 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.959 12:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.959 12:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.959 12:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.959 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:34.960 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:34.960 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:34.960 12:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.960 12:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.960 [2024-11-25 12:13:30.895150] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:34.960 12:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.960 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:34.960 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:34.960 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:34.960 12:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:34.960 12:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.960 12:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.960 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.960 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:34.960 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:34.960 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:34.960 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.960 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.960 [2024-11-25 12:13:31.042837] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:34.960 [2024-11-25 12:13:31.042974] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:35.217 [2024-11-25 12:13:31.130801] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:35.217 [2024-11-25 12:13:31.130903] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:35.217 [2024-11-25 12:13:31.130925] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:35.217 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.217 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:35.218 12:13:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:35.218 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:35.218 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.218 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.218 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.218 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.218 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:35.218 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:35.218 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:35.218 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:35.218 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:35.218 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:35.218 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.218 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.218 BaseBdev2 00:14:35.218 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.218 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:35.218 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:35.218 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:14:35.218 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:35.218 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:35.218 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:35.218 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:35.218 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.218 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.218 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.218 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:35.218 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.218 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.218 [ 00:14:35.218 { 00:14:35.218 "name": "BaseBdev2", 00:14:35.218 "aliases": [ 00:14:35.218 "f147c802-8c66-453a-bff1-44beb5baec16" 00:14:35.218 ], 00:14:35.218 "product_name": "Malloc disk", 00:14:35.218 "block_size": 512, 00:14:35.218 "num_blocks": 65536, 00:14:35.218 "uuid": "f147c802-8c66-453a-bff1-44beb5baec16", 00:14:35.218 "assigned_rate_limits": { 00:14:35.218 "rw_ios_per_sec": 0, 00:14:35.218 "rw_mbytes_per_sec": 0, 00:14:35.218 "r_mbytes_per_sec": 0, 00:14:35.218 "w_mbytes_per_sec": 0 00:14:35.218 }, 00:14:35.218 "claimed": false, 00:14:35.218 "zoned": false, 00:14:35.218 "supported_io_types": { 00:14:35.218 "read": true, 00:14:35.218 "write": true, 00:14:35.218 "unmap": true, 00:14:35.218 "flush": true, 00:14:35.218 "reset": true, 00:14:35.218 "nvme_admin": false, 00:14:35.218 "nvme_io": false, 00:14:35.218 
"nvme_io_md": false, 00:14:35.218 "write_zeroes": true, 00:14:35.218 "zcopy": true, 00:14:35.218 "get_zone_info": false, 00:14:35.218 "zone_management": false, 00:14:35.218 "zone_append": false, 00:14:35.218 "compare": false, 00:14:35.218 "compare_and_write": false, 00:14:35.218 "abort": true, 00:14:35.218 "seek_hole": false, 00:14:35.218 "seek_data": false, 00:14:35.218 "copy": true, 00:14:35.218 "nvme_iov_md": false 00:14:35.218 }, 00:14:35.218 "memory_domains": [ 00:14:35.218 { 00:14:35.218 "dma_device_id": "system", 00:14:35.218 "dma_device_type": 1 00:14:35.218 }, 00:14:35.218 { 00:14:35.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:35.218 "dma_device_type": 2 00:14:35.218 } 00:14:35.218 ], 00:14:35.218 "driver_specific": {} 00:14:35.218 } 00:14:35.218 ] 00:14:35.218 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.218 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:35.218 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:35.218 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:35.218 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:35.218 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.218 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.218 BaseBdev3 00:14:35.218 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.476 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:35.476 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:35.476 12:13:31 bdev_raid.raid_state_function_test_sb -- 
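The `waitforbdev` calls traced above (`autotest_common.sh@903`–`@910`) default the timeout to 2000 when none is passed, then wait for examine to finish and query the bdev with that timeout. A standalone sketch of that control flow; `rpc_bdev_exists` is a hypothetical stub standing in for the two RPC calls:

```shell
# Sketch of the waitforbdev flow from autotest_common.sh@903-910 in the trace.
# rpc_bdev_exists is a hypothetical stub; the real helper runs
#   rpc_cmd bdev_wait_for_examine
#   rpc_cmd bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"
rpc_bdev_exists() { [[ $1 == BaseBdev2 || $1 == BaseBdev3 ]]; }

waitforbdev() {
    local bdev_name=$1
    local bdev_timeout=$2
    # Default seen at @906: an empty timeout becomes 2000.
    [[ -z $bdev_timeout ]] && bdev_timeout=2000
    rpc_bdev_exists "$bdev_name" "$bdev_timeout"
}

waitforbdev BaseBdev2 && result=present || result=absent
```

In the trace this is why each `bdev_malloc_create 32 512 -b BaseBdevN` is immediately followed by `bdev_wait_for_examine` and a `bdev_get_bdevs -b BaseBdevN -t 2000` dump of the new bdev's JSON.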
common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:35.476 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:35.476 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:35.476 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:35.476 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:35.476 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.476 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.476 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.476 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:35.476 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.476 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.476 [ 00:14:35.476 { 00:14:35.476 "name": "BaseBdev3", 00:14:35.476 "aliases": [ 00:14:35.476 "afe6be88-dbd2-4e67-8d30-255a42afb229" 00:14:35.476 ], 00:14:35.476 "product_name": "Malloc disk", 00:14:35.476 "block_size": 512, 00:14:35.476 "num_blocks": 65536, 00:14:35.476 "uuid": "afe6be88-dbd2-4e67-8d30-255a42afb229", 00:14:35.476 "assigned_rate_limits": { 00:14:35.476 "rw_ios_per_sec": 0, 00:14:35.476 "rw_mbytes_per_sec": 0, 00:14:35.476 "r_mbytes_per_sec": 0, 00:14:35.476 "w_mbytes_per_sec": 0 00:14:35.476 }, 00:14:35.476 "claimed": false, 00:14:35.476 "zoned": false, 00:14:35.476 "supported_io_types": { 00:14:35.476 "read": true, 00:14:35.476 "write": true, 00:14:35.476 "unmap": true, 00:14:35.476 "flush": true, 00:14:35.476 "reset": true, 00:14:35.476 "nvme_admin": false, 
00:14:35.476 "nvme_io": false, 00:14:35.476 "nvme_io_md": false, 00:14:35.476 "write_zeroes": true, 00:14:35.476 "zcopy": true, 00:14:35.476 "get_zone_info": false, 00:14:35.476 "zone_management": false, 00:14:35.476 "zone_append": false, 00:14:35.476 "compare": false, 00:14:35.476 "compare_and_write": false, 00:14:35.476 "abort": true, 00:14:35.476 "seek_hole": false, 00:14:35.476 "seek_data": false, 00:14:35.476 "copy": true, 00:14:35.476 "nvme_iov_md": false 00:14:35.476 }, 00:14:35.476 "memory_domains": [ 00:14:35.476 { 00:14:35.476 "dma_device_id": "system", 00:14:35.476 "dma_device_type": 1 00:14:35.476 }, 00:14:35.476 { 00:14:35.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:35.476 "dma_device_type": 2 00:14:35.476 } 00:14:35.476 ], 00:14:35.476 "driver_specific": {} 00:14:35.476 } 00:14:35.476 ] 00:14:35.476 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.476 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:35.476 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:35.476 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:35.476 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:35.476 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.476 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.476 [2024-11-25 12:13:31.344164] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:35.476 [2024-11-25 12:13:31.344237] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:35.476 [2024-11-25 12:13:31.344285] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:35.476 [2024-11-25 12:13:31.347006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:35.476 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.476 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:35.477 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:35.477 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:35.477 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:35.477 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:35.477 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:35.477 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.477 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.477 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.477 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.477 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.477 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.477 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.477 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.477 
12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.477 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.477 "name": "Existed_Raid", 00:14:35.477 "uuid": "a3f70e07-8321-4b9a-96aa-17617764ff58", 00:14:35.477 "strip_size_kb": 0, 00:14:35.477 "state": "configuring", 00:14:35.477 "raid_level": "raid1", 00:14:35.477 "superblock": true, 00:14:35.477 "num_base_bdevs": 3, 00:14:35.477 "num_base_bdevs_discovered": 2, 00:14:35.477 "num_base_bdevs_operational": 3, 00:14:35.477 "base_bdevs_list": [ 00:14:35.477 { 00:14:35.477 "name": "BaseBdev1", 00:14:35.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.477 "is_configured": false, 00:14:35.477 "data_offset": 0, 00:14:35.477 "data_size": 0 00:14:35.477 }, 00:14:35.477 { 00:14:35.477 "name": "BaseBdev2", 00:14:35.477 "uuid": "f147c802-8c66-453a-bff1-44beb5baec16", 00:14:35.477 "is_configured": true, 00:14:35.477 "data_offset": 2048, 00:14:35.477 "data_size": 63488 00:14:35.477 }, 00:14:35.477 { 00:14:35.477 "name": "BaseBdev3", 00:14:35.477 "uuid": "afe6be88-dbd2-4e67-8d30-255a42afb229", 00:14:35.477 "is_configured": true, 00:14:35.477 "data_offset": 2048, 00:14:35.477 "data_size": 63488 00:14:35.477 } 00:14:35.477 ] 00:14:35.477 }' 00:14:35.477 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.477 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.761 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:35.761 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.761 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.761 [2024-11-25 12:13:31.844421] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:36.020 12:13:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.020 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:36.020 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:36.020 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:36.020 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:36.020 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:36.020 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:36.020 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.020 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.020 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.020 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.020 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.020 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.020 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.020 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.020 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.020 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.020 "name": 
"Existed_Raid", 00:14:36.020 "uuid": "a3f70e07-8321-4b9a-96aa-17617764ff58", 00:14:36.020 "strip_size_kb": 0, 00:14:36.020 "state": "configuring", 00:14:36.020 "raid_level": "raid1", 00:14:36.020 "superblock": true, 00:14:36.020 "num_base_bdevs": 3, 00:14:36.020 "num_base_bdevs_discovered": 1, 00:14:36.020 "num_base_bdevs_operational": 3, 00:14:36.020 "base_bdevs_list": [ 00:14:36.020 { 00:14:36.020 "name": "BaseBdev1", 00:14:36.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.020 "is_configured": false, 00:14:36.020 "data_offset": 0, 00:14:36.020 "data_size": 0 00:14:36.020 }, 00:14:36.020 { 00:14:36.020 "name": null, 00:14:36.020 "uuid": "f147c802-8c66-453a-bff1-44beb5baec16", 00:14:36.020 "is_configured": false, 00:14:36.020 "data_offset": 0, 00:14:36.020 "data_size": 63488 00:14:36.020 }, 00:14:36.020 { 00:14:36.020 "name": "BaseBdev3", 00:14:36.020 "uuid": "afe6be88-dbd2-4e67-8d30-255a42afb229", 00:14:36.020 "is_configured": true, 00:14:36.020 "data_offset": 2048, 00:14:36.020 "data_size": 63488 00:14:36.020 } 00:14:36.020 ] 00:14:36.020 }' 00:14:36.020 12:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.020 12:13:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.587 12:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:36.587 12:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.587 12:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.587 12:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.587 12:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.587 12:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:36.587 
12:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:36.587 12:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.587 12:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.587 [2024-11-25 12:13:32.462376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:36.587 BaseBdev1 00:14:36.587 12:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.587 12:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:36.587 12:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:36.587 12:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:36.587 12:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:36.587 12:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:36.587 12:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:36.587 12:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:36.587 12:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.587 12:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.587 12:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.587 12:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:36.587 12:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:36.587 12:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.587 [ 00:14:36.587 { 00:14:36.587 "name": "BaseBdev1", 00:14:36.587 "aliases": [ 00:14:36.587 "1e92999f-c81e-424f-8106-6af91552872d" 00:14:36.587 ], 00:14:36.587 "product_name": "Malloc disk", 00:14:36.587 "block_size": 512, 00:14:36.587 "num_blocks": 65536, 00:14:36.587 "uuid": "1e92999f-c81e-424f-8106-6af91552872d", 00:14:36.587 "assigned_rate_limits": { 00:14:36.587 "rw_ios_per_sec": 0, 00:14:36.587 "rw_mbytes_per_sec": 0, 00:14:36.587 "r_mbytes_per_sec": 0, 00:14:36.587 "w_mbytes_per_sec": 0 00:14:36.587 }, 00:14:36.587 "claimed": true, 00:14:36.587 "claim_type": "exclusive_write", 00:14:36.587 "zoned": false, 00:14:36.587 "supported_io_types": { 00:14:36.587 "read": true, 00:14:36.587 "write": true, 00:14:36.587 "unmap": true, 00:14:36.587 "flush": true, 00:14:36.587 "reset": true, 00:14:36.587 "nvme_admin": false, 00:14:36.587 "nvme_io": false, 00:14:36.587 "nvme_io_md": false, 00:14:36.587 "write_zeroes": true, 00:14:36.587 "zcopy": true, 00:14:36.587 "get_zone_info": false, 00:14:36.587 "zone_management": false, 00:14:36.587 "zone_append": false, 00:14:36.587 "compare": false, 00:14:36.587 "compare_and_write": false, 00:14:36.587 "abort": true, 00:14:36.587 "seek_hole": false, 00:14:36.587 "seek_data": false, 00:14:36.587 "copy": true, 00:14:36.587 "nvme_iov_md": false 00:14:36.587 }, 00:14:36.587 "memory_domains": [ 00:14:36.587 { 00:14:36.587 "dma_device_id": "system", 00:14:36.587 "dma_device_type": 1 00:14:36.587 }, 00:14:36.587 { 00:14:36.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.587 "dma_device_type": 2 00:14:36.587 } 00:14:36.587 ], 00:14:36.587 "driver_specific": {} 00:14:36.587 } 00:14:36.587 ] 00:14:36.587 12:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.587 12:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:36.587 
12:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:36.587 12:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:36.587 12:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:36.587 12:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:36.587 12:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:36.587 12:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:36.587 12:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.587 12:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.587 12:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.587 12:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.587 12:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.587 12:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.587 12:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.587 12:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.587 12:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.587 12:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.587 "name": "Existed_Raid", 00:14:36.587 "uuid": "a3f70e07-8321-4b9a-96aa-17617764ff58", 00:14:36.587 "strip_size_kb": 0, 
00:14:36.587 "state": "configuring", 00:14:36.587 "raid_level": "raid1", 00:14:36.587 "superblock": true, 00:14:36.587 "num_base_bdevs": 3, 00:14:36.587 "num_base_bdevs_discovered": 2, 00:14:36.587 "num_base_bdevs_operational": 3, 00:14:36.587 "base_bdevs_list": [ 00:14:36.587 { 00:14:36.587 "name": "BaseBdev1", 00:14:36.587 "uuid": "1e92999f-c81e-424f-8106-6af91552872d", 00:14:36.587 "is_configured": true, 00:14:36.587 "data_offset": 2048, 00:14:36.587 "data_size": 63488 00:14:36.587 }, 00:14:36.587 { 00:14:36.587 "name": null, 00:14:36.587 "uuid": "f147c802-8c66-453a-bff1-44beb5baec16", 00:14:36.587 "is_configured": false, 00:14:36.587 "data_offset": 0, 00:14:36.587 "data_size": 63488 00:14:36.587 }, 00:14:36.587 { 00:14:36.587 "name": "BaseBdev3", 00:14:36.587 "uuid": "afe6be88-dbd2-4e67-8d30-255a42afb229", 00:14:36.587 "is_configured": true, 00:14:36.587 "data_offset": 2048, 00:14:36.587 "data_size": 63488 00:14:36.587 } 00:14:36.587 ] 00:14:36.587 }' 00:14:36.587 12:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.587 12:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.152 12:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.152 12:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.152 12:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:37.152 12:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.152 12:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.152 12:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:37.152 12:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:14:37.152 12:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.152 12:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.152 [2024-11-25 12:13:32.998635] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:37.152 12:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.152 12:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:37.152 12:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:37.152 12:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:37.152 12:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:37.152 12:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:37.152 12:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:37.152 12:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.152 12:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.152 12:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.152 12:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.153 12:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.153 12:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.153 12:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.153 12:13:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.153 12:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.153 12:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.153 "name": "Existed_Raid", 00:14:37.153 "uuid": "a3f70e07-8321-4b9a-96aa-17617764ff58", 00:14:37.153 "strip_size_kb": 0, 00:14:37.153 "state": "configuring", 00:14:37.153 "raid_level": "raid1", 00:14:37.153 "superblock": true, 00:14:37.153 "num_base_bdevs": 3, 00:14:37.153 "num_base_bdevs_discovered": 1, 00:14:37.153 "num_base_bdevs_operational": 3, 00:14:37.153 "base_bdevs_list": [ 00:14:37.153 { 00:14:37.153 "name": "BaseBdev1", 00:14:37.153 "uuid": "1e92999f-c81e-424f-8106-6af91552872d", 00:14:37.153 "is_configured": true, 00:14:37.153 "data_offset": 2048, 00:14:37.153 "data_size": 63488 00:14:37.153 }, 00:14:37.153 { 00:14:37.153 "name": null, 00:14:37.153 "uuid": "f147c802-8c66-453a-bff1-44beb5baec16", 00:14:37.153 "is_configured": false, 00:14:37.153 "data_offset": 0, 00:14:37.153 "data_size": 63488 00:14:37.153 }, 00:14:37.153 { 00:14:37.153 "name": null, 00:14:37.153 "uuid": "afe6be88-dbd2-4e67-8d30-255a42afb229", 00:14:37.153 "is_configured": false, 00:14:37.153 "data_offset": 0, 00:14:37.153 "data_size": 63488 00:14:37.153 } 00:14:37.153 ] 00:14:37.153 }' 00:14:37.153 12:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.153 12:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.410 12:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.410 12:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.410 12:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.410 12:13:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:37.410 12:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.668 12:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:37.668 12:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:37.668 12:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.668 12:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.668 [2024-11-25 12:13:33.506835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:37.668 12:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.668 12:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:37.668 12:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:37.668 12:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:37.668 12:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:37.668 12:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:37.668 12:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:37.668 12:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.668 12:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.668 12:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:37.668 12:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.668 12:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.668 12:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.668 12:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.668 12:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.668 12:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.668 12:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.668 "name": "Existed_Raid", 00:14:37.668 "uuid": "a3f70e07-8321-4b9a-96aa-17617764ff58", 00:14:37.668 "strip_size_kb": 0, 00:14:37.668 "state": "configuring", 00:14:37.668 "raid_level": "raid1", 00:14:37.668 "superblock": true, 00:14:37.668 "num_base_bdevs": 3, 00:14:37.668 "num_base_bdevs_discovered": 2, 00:14:37.668 "num_base_bdevs_operational": 3, 00:14:37.668 "base_bdevs_list": [ 00:14:37.668 { 00:14:37.668 "name": "BaseBdev1", 00:14:37.668 "uuid": "1e92999f-c81e-424f-8106-6af91552872d", 00:14:37.668 "is_configured": true, 00:14:37.668 "data_offset": 2048, 00:14:37.668 "data_size": 63488 00:14:37.668 }, 00:14:37.668 { 00:14:37.668 "name": null, 00:14:37.668 "uuid": "f147c802-8c66-453a-bff1-44beb5baec16", 00:14:37.668 "is_configured": false, 00:14:37.668 "data_offset": 0, 00:14:37.668 "data_size": 63488 00:14:37.668 }, 00:14:37.668 { 00:14:37.668 "name": "BaseBdev3", 00:14:37.668 "uuid": "afe6be88-dbd2-4e67-8d30-255a42afb229", 00:14:37.668 "is_configured": true, 00:14:37.668 "data_offset": 2048, 00:14:37.668 "data_size": 63488 00:14:37.668 } 00:14:37.668 ] 00:14:37.668 }' 00:14:37.668 12:13:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.668 12:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.926 12:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:37.926 12:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.926 12:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.926 12:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.926 12:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.184 12:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:38.184 12:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:38.184 12:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.184 12:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.184 [2024-11-25 12:13:34.030965] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:38.184 12:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.184 12:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:38.184 12:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:38.184 12:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:38.184 12:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:38.184 12:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:14:38.184 12:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:38.184 12:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.184 12:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.184 12:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.184 12:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.184 12:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.184 12:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.185 12:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.185 12:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.185 12:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.185 12:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.185 "name": "Existed_Raid", 00:14:38.185 "uuid": "a3f70e07-8321-4b9a-96aa-17617764ff58", 00:14:38.185 "strip_size_kb": 0, 00:14:38.185 "state": "configuring", 00:14:38.185 "raid_level": "raid1", 00:14:38.185 "superblock": true, 00:14:38.185 "num_base_bdevs": 3, 00:14:38.185 "num_base_bdevs_discovered": 1, 00:14:38.185 "num_base_bdevs_operational": 3, 00:14:38.185 "base_bdevs_list": [ 00:14:38.185 { 00:14:38.185 "name": null, 00:14:38.185 "uuid": "1e92999f-c81e-424f-8106-6af91552872d", 00:14:38.185 "is_configured": false, 00:14:38.185 "data_offset": 0, 00:14:38.185 "data_size": 63488 00:14:38.185 }, 00:14:38.185 { 00:14:38.185 "name": null, 00:14:38.185 "uuid": 
"f147c802-8c66-453a-bff1-44beb5baec16", 00:14:38.185 "is_configured": false, 00:14:38.185 "data_offset": 0, 00:14:38.185 "data_size": 63488 00:14:38.185 }, 00:14:38.185 { 00:14:38.185 "name": "BaseBdev3", 00:14:38.185 "uuid": "afe6be88-dbd2-4e67-8d30-255a42afb229", 00:14:38.185 "is_configured": true, 00:14:38.185 "data_offset": 2048, 00:14:38.185 "data_size": 63488 00:14:38.185 } 00:14:38.185 ] 00:14:38.185 }' 00:14:38.185 12:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.185 12:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.752 12:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:38.752 12:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.752 12:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.752 12:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.752 12:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.752 12:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:38.752 12:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:38.752 12:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.752 12:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.752 [2024-11-25 12:13:34.711848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:38.752 12:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.752 12:13:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:38.752 12:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:38.752 12:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:38.752 12:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:38.752 12:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:38.752 12:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:38.752 12:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.752 12:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.752 12:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.752 12:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.752 12:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.752 12:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.752 12:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.752 12:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.752 12:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.752 12:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.752 "name": "Existed_Raid", 00:14:38.752 "uuid": "a3f70e07-8321-4b9a-96aa-17617764ff58", 00:14:38.752 "strip_size_kb": 0, 00:14:38.752 "state": "configuring", 00:14:38.752 
"raid_level": "raid1", 00:14:38.752 "superblock": true, 00:14:38.752 "num_base_bdevs": 3, 00:14:38.752 "num_base_bdevs_discovered": 2, 00:14:38.752 "num_base_bdevs_operational": 3, 00:14:38.752 "base_bdevs_list": [ 00:14:38.752 { 00:14:38.752 "name": null, 00:14:38.752 "uuid": "1e92999f-c81e-424f-8106-6af91552872d", 00:14:38.752 "is_configured": false, 00:14:38.752 "data_offset": 0, 00:14:38.752 "data_size": 63488 00:14:38.752 }, 00:14:38.752 { 00:14:38.752 "name": "BaseBdev2", 00:14:38.752 "uuid": "f147c802-8c66-453a-bff1-44beb5baec16", 00:14:38.752 "is_configured": true, 00:14:38.752 "data_offset": 2048, 00:14:38.752 "data_size": 63488 00:14:38.752 }, 00:14:38.752 { 00:14:38.752 "name": "BaseBdev3", 00:14:38.752 "uuid": "afe6be88-dbd2-4e67-8d30-255a42afb229", 00:14:38.752 "is_configured": true, 00:14:38.752 "data_offset": 2048, 00:14:38.752 "data_size": 63488 00:14:38.752 } 00:14:38.752 ] 00:14:38.752 }' 00:14:38.752 12:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.752 12:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.319 12:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:39.319 12:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.319 12:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.319 12:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.319 12:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.319 12:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:39.319 12:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.319 12:13:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:39.319 12:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.319 12:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.319 12:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.319 12:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1e92999f-c81e-424f-8106-6af91552872d 00:14:39.319 12:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.319 12:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.319 [2024-11-25 12:13:35.389612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:39.319 [2024-11-25 12:13:35.389935] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:39.319 [2024-11-25 12:13:35.389954] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:39.319 NewBaseBdev 00:14:39.319 [2024-11-25 12:13:35.390303] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:39.319 [2024-11-25 12:13:35.390538] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:39.319 [2024-11-25 12:13:35.390563] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:39.319 [2024-11-25 12:13:35.390734] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.319 12:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.319 12:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:39.319 
12:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:39.319 12:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:39.319 12:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:39.319 12:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:39.319 12:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:39.319 12:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:39.319 12:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.319 12:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.319 12:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.319 12:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:39.319 12:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.319 12:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.577 [ 00:14:39.577 { 00:14:39.577 "name": "NewBaseBdev", 00:14:39.577 "aliases": [ 00:14:39.577 "1e92999f-c81e-424f-8106-6af91552872d" 00:14:39.577 ], 00:14:39.577 "product_name": "Malloc disk", 00:14:39.577 "block_size": 512, 00:14:39.577 "num_blocks": 65536, 00:14:39.577 "uuid": "1e92999f-c81e-424f-8106-6af91552872d", 00:14:39.577 "assigned_rate_limits": { 00:14:39.577 "rw_ios_per_sec": 0, 00:14:39.577 "rw_mbytes_per_sec": 0, 00:14:39.577 "r_mbytes_per_sec": 0, 00:14:39.577 "w_mbytes_per_sec": 0 00:14:39.577 }, 00:14:39.578 "claimed": true, 00:14:39.578 "claim_type": "exclusive_write", 00:14:39.578 
"zoned": false, 00:14:39.578 "supported_io_types": { 00:14:39.578 "read": true, 00:14:39.578 "write": true, 00:14:39.578 "unmap": true, 00:14:39.578 "flush": true, 00:14:39.578 "reset": true, 00:14:39.578 "nvme_admin": false, 00:14:39.578 "nvme_io": false, 00:14:39.578 "nvme_io_md": false, 00:14:39.578 "write_zeroes": true, 00:14:39.578 "zcopy": true, 00:14:39.578 "get_zone_info": false, 00:14:39.578 "zone_management": false, 00:14:39.578 "zone_append": false, 00:14:39.578 "compare": false, 00:14:39.578 "compare_and_write": false, 00:14:39.578 "abort": true, 00:14:39.578 "seek_hole": false, 00:14:39.578 "seek_data": false, 00:14:39.578 "copy": true, 00:14:39.578 "nvme_iov_md": false 00:14:39.578 }, 00:14:39.578 "memory_domains": [ 00:14:39.578 { 00:14:39.578 "dma_device_id": "system", 00:14:39.578 "dma_device_type": 1 00:14:39.578 }, 00:14:39.578 { 00:14:39.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.578 "dma_device_type": 2 00:14:39.578 } 00:14:39.578 ], 00:14:39.578 "driver_specific": {} 00:14:39.578 } 00:14:39.578 ] 00:14:39.578 12:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.578 12:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:39.578 12:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:39.578 12:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:39.578 12:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.578 12:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:39.578 12:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:39.578 12:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:14:39.578 12:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.578 12:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.578 12:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.578 12:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.578 12:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.578 12:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.578 12:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.578 12:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:39.578 12:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.578 12:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.578 "name": "Existed_Raid", 00:14:39.578 "uuid": "a3f70e07-8321-4b9a-96aa-17617764ff58", 00:14:39.578 "strip_size_kb": 0, 00:14:39.578 "state": "online", 00:14:39.578 "raid_level": "raid1", 00:14:39.578 "superblock": true, 00:14:39.578 "num_base_bdevs": 3, 00:14:39.578 "num_base_bdevs_discovered": 3, 00:14:39.578 "num_base_bdevs_operational": 3, 00:14:39.578 "base_bdevs_list": [ 00:14:39.578 { 00:14:39.578 "name": "NewBaseBdev", 00:14:39.578 "uuid": "1e92999f-c81e-424f-8106-6af91552872d", 00:14:39.578 "is_configured": true, 00:14:39.578 "data_offset": 2048, 00:14:39.578 "data_size": 63488 00:14:39.578 }, 00:14:39.578 { 00:14:39.578 "name": "BaseBdev2", 00:14:39.578 "uuid": "f147c802-8c66-453a-bff1-44beb5baec16", 00:14:39.578 "is_configured": true, 00:14:39.578 "data_offset": 2048, 00:14:39.578 "data_size": 63488 00:14:39.578 }, 00:14:39.578 
{ 00:14:39.578 "name": "BaseBdev3", 00:14:39.578 "uuid": "afe6be88-dbd2-4e67-8d30-255a42afb229", 00:14:39.578 "is_configured": true, 00:14:39.578 "data_offset": 2048, 00:14:39.578 "data_size": 63488 00:14:39.578 } 00:14:39.578 ] 00:14:39.578 }' 00:14:39.578 12:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.578 12:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.143 12:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:40.143 12:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:40.143 12:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:40.143 12:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:40.143 12:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:40.143 12:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:40.143 12:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:40.143 12:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:40.143 12:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.143 12:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.144 [2024-11-25 12:13:35.954286] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:40.144 12:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.144 12:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:40.144 "name": "Existed_Raid", 00:14:40.144 
"aliases": [ 00:14:40.144 "a3f70e07-8321-4b9a-96aa-17617764ff58" 00:14:40.144 ], 00:14:40.144 "product_name": "Raid Volume", 00:14:40.144 "block_size": 512, 00:14:40.144 "num_blocks": 63488, 00:14:40.144 "uuid": "a3f70e07-8321-4b9a-96aa-17617764ff58", 00:14:40.144 "assigned_rate_limits": { 00:14:40.144 "rw_ios_per_sec": 0, 00:14:40.144 "rw_mbytes_per_sec": 0, 00:14:40.144 "r_mbytes_per_sec": 0, 00:14:40.144 "w_mbytes_per_sec": 0 00:14:40.144 }, 00:14:40.144 "claimed": false, 00:14:40.144 "zoned": false, 00:14:40.144 "supported_io_types": { 00:14:40.144 "read": true, 00:14:40.144 "write": true, 00:14:40.144 "unmap": false, 00:14:40.144 "flush": false, 00:14:40.144 "reset": true, 00:14:40.144 "nvme_admin": false, 00:14:40.144 "nvme_io": false, 00:14:40.144 "nvme_io_md": false, 00:14:40.144 "write_zeroes": true, 00:14:40.144 "zcopy": false, 00:14:40.144 "get_zone_info": false, 00:14:40.144 "zone_management": false, 00:14:40.144 "zone_append": false, 00:14:40.144 "compare": false, 00:14:40.144 "compare_and_write": false, 00:14:40.144 "abort": false, 00:14:40.144 "seek_hole": false, 00:14:40.144 "seek_data": false, 00:14:40.144 "copy": false, 00:14:40.144 "nvme_iov_md": false 00:14:40.144 }, 00:14:40.144 "memory_domains": [ 00:14:40.144 { 00:14:40.144 "dma_device_id": "system", 00:14:40.144 "dma_device_type": 1 00:14:40.144 }, 00:14:40.144 { 00:14:40.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.144 "dma_device_type": 2 00:14:40.144 }, 00:14:40.144 { 00:14:40.144 "dma_device_id": "system", 00:14:40.144 "dma_device_type": 1 00:14:40.144 }, 00:14:40.144 { 00:14:40.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.144 "dma_device_type": 2 00:14:40.144 }, 00:14:40.144 { 00:14:40.144 "dma_device_id": "system", 00:14:40.144 "dma_device_type": 1 00:14:40.144 }, 00:14:40.144 { 00:14:40.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.144 "dma_device_type": 2 00:14:40.144 } 00:14:40.144 ], 00:14:40.144 "driver_specific": { 00:14:40.144 "raid": { 00:14:40.144 
"uuid": "a3f70e07-8321-4b9a-96aa-17617764ff58", 00:14:40.144 "strip_size_kb": 0, 00:14:40.144 "state": "online", 00:14:40.144 "raid_level": "raid1", 00:14:40.144 "superblock": true, 00:14:40.144 "num_base_bdevs": 3, 00:14:40.144 "num_base_bdevs_discovered": 3, 00:14:40.144 "num_base_bdevs_operational": 3, 00:14:40.144 "base_bdevs_list": [ 00:14:40.144 { 00:14:40.144 "name": "NewBaseBdev", 00:14:40.144 "uuid": "1e92999f-c81e-424f-8106-6af91552872d", 00:14:40.144 "is_configured": true, 00:14:40.144 "data_offset": 2048, 00:14:40.144 "data_size": 63488 00:14:40.144 }, 00:14:40.144 { 00:14:40.144 "name": "BaseBdev2", 00:14:40.144 "uuid": "f147c802-8c66-453a-bff1-44beb5baec16", 00:14:40.144 "is_configured": true, 00:14:40.144 "data_offset": 2048, 00:14:40.144 "data_size": 63488 00:14:40.144 }, 00:14:40.144 { 00:14:40.144 "name": "BaseBdev3", 00:14:40.144 "uuid": "afe6be88-dbd2-4e67-8d30-255a42afb229", 00:14:40.144 "is_configured": true, 00:14:40.144 "data_offset": 2048, 00:14:40.144 "data_size": 63488 00:14:40.144 } 00:14:40.144 ] 00:14:40.144 } 00:14:40.144 } 00:14:40.144 }' 00:14:40.144 12:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:40.144 12:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:40.144 BaseBdev2 00:14:40.144 BaseBdev3' 00:14:40.144 12:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:40.144 12:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:40.144 12:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:40.144 12:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:40.144 
12:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:40.144 12:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.144 12:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.144 12:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.144 12:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:40.144 12:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:40.144 12:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:40.144 12:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:40.144 12:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:40.144 12:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.144 12:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.144 12:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.144 12:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:40.144 12:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:40.144 12:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:40.144 12:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:40.144 12:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:40.144 12:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.144 12:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:40.144 12:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.401 12:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:40.401 12:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:40.401 12:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:40.401 12:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.401 12:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.401 [2024-11-25 12:13:36.273958] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:40.401 [2024-11-25 12:13:36.274278] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:40.401 [2024-11-25 12:13:36.274440] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:40.401 [2024-11-25 12:13:36.274847] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:40.401 [2024-11-25 12:13:36.274866] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:40.401 12:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.401 12:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68096 00:14:40.401 12:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 68096 ']' 00:14:40.401 12:13:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68096 00:14:40.401 12:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:40.401 12:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:40.402 12:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68096 00:14:40.402 killing process with pid 68096 00:14:40.402 12:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:40.402 12:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:40.402 12:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68096' 00:14:40.402 12:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68096 00:14:40.402 [2024-11-25 12:13:36.313840] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:40.402 12:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68096 00:14:40.659 [2024-11-25 12:13:36.599483] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:42.051 12:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:42.051 00:14:42.051 real 0m11.669s 00:14:42.051 user 0m19.209s 00:14:42.051 sys 0m1.589s 00:14:42.051 12:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:42.051 ************************************ 00:14:42.051 END TEST raid_state_function_test_sb 00:14:42.051 ************************************ 00:14:42.051 12:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.051 12:13:37 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:14:42.051 12:13:37 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:42.051 12:13:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:42.051 12:13:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:42.051 ************************************ 00:14:42.051 START TEST raid_superblock_test 00:14:42.051 ************************************ 00:14:42.051 12:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:14:42.051 12:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:14:42.051 12:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:14:42.051 12:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:42.051 12:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:42.051 12:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:42.051 12:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:42.051 12:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:42.051 12:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:42.051 12:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:42.051 12:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:42.051 12:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:42.051 12:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:42.051 12:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:42.051 12:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:14:42.051 12:13:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:14:42.051 12:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68728 00:14:42.051 12:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68728 00:14:42.051 12:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:42.051 12:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68728 ']' 00:14:42.051 12:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.051 12:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:42.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:42.051 12:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.051 12:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:42.051 12:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.051 [2024-11-25 12:13:37.882662] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:14:42.051 [2024-11-25 12:13:37.882842] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68728 ] 00:14:42.051 [2024-11-25 12:13:38.068462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.310 [2024-11-25 12:13:38.250433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.569 [2024-11-25 12:13:38.456837] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:42.569 [2024-11-25 12:13:38.456914] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:42.828 12:13:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:42.828 12:13:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:42.828 12:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:42.828 12:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:42.828 12:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:42.828 12:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:42.828 12:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:42.828 12:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:42.828 12:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:42.828 12:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:42.828 12:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:42.828 
12:13:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.828 12:13:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.828 malloc1 00:14:42.828 12:13:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.828 12:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:42.828 12:13:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.828 12:13:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.828 [2024-11-25 12:13:38.889533] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:42.828 [2024-11-25 12:13:38.889615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.828 [2024-11-25 12:13:38.889647] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:42.828 [2024-11-25 12:13:38.889662] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.828 [2024-11-25 12:13:38.892554] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.828 [2024-11-25 12:13:38.892602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:42.828 pt1 00:14:42.828 12:13:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.828 12:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:42.828 12:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:42.828 12:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:42.828 12:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:42.828 12:13:38 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:42.828 12:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:42.828 12:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:42.828 12:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:42.828 12:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:42.828 12:13:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.828 12:13:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.087 malloc2 00:14:43.087 12:13:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.087 12:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:43.087 12:13:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.087 12:13:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.087 [2024-11-25 12:13:38.945569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:43.087 [2024-11-25 12:13:38.945641] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:43.087 [2024-11-25 12:13:38.945671] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:43.087 [2024-11-25 12:13:38.945685] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:43.087 [2024-11-25 12:13:38.948475] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:43.087 [2024-11-25 12:13:38.948654] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:43.087 
pt2 00:14:43.087 12:13:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.087 12:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:43.087 12:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:43.087 12:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:43.087 12:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:43.087 12:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:43.087 12:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:43.087 12:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:43.087 12:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:43.087 12:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:43.087 12:13:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.087 12:13:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.087 malloc3 00:14:43.087 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.087 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:43.087 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.087 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.087 [2024-11-25 12:13:39.011777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:43.087 [2024-11-25 12:13:39.011980] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:43.087 [2024-11-25 12:13:39.012029] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:43.087 [2024-11-25 12:13:39.012046] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:43.087 [2024-11-25 12:13:39.014831] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:43.087 [2024-11-25 12:13:39.014877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:43.087 pt3 00:14:43.087 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.087 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:43.087 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:43.087 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:14:43.087 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.087 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.087 [2024-11-25 12:13:39.023832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:43.087 [2024-11-25 12:13:39.026269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:43.087 [2024-11-25 12:13:39.026531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:43.087 [2024-11-25 12:13:39.026762] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:43.087 [2024-11-25 12:13:39.026791] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:43.087 [2024-11-25 12:13:39.027113] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:43.087 
[2024-11-25 12:13:39.027352] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:43.087 [2024-11-25 12:13:39.027374] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:43.087 [2024-11-25 12:13:39.027558] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:43.087 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.087 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:43.087 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.087 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.087 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:43.087 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:43.087 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:43.087 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.087 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.087 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.087 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.087 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.087 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.087 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.087 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:14:43.087 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.087 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.087 "name": "raid_bdev1", 00:14:43.087 "uuid": "6c7ceab7-387e-4a25-812a-c32c8ba6c82a", 00:14:43.087 "strip_size_kb": 0, 00:14:43.087 "state": "online", 00:14:43.087 "raid_level": "raid1", 00:14:43.087 "superblock": true, 00:14:43.087 "num_base_bdevs": 3, 00:14:43.087 "num_base_bdevs_discovered": 3, 00:14:43.087 "num_base_bdevs_operational": 3, 00:14:43.087 "base_bdevs_list": [ 00:14:43.087 { 00:14:43.087 "name": "pt1", 00:14:43.087 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:43.087 "is_configured": true, 00:14:43.087 "data_offset": 2048, 00:14:43.087 "data_size": 63488 00:14:43.087 }, 00:14:43.087 { 00:14:43.087 "name": "pt2", 00:14:43.087 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:43.087 "is_configured": true, 00:14:43.087 "data_offset": 2048, 00:14:43.087 "data_size": 63488 00:14:43.087 }, 00:14:43.087 { 00:14:43.087 "name": "pt3", 00:14:43.087 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:43.087 "is_configured": true, 00:14:43.087 "data_offset": 2048, 00:14:43.087 "data_size": 63488 00:14:43.087 } 00:14:43.087 ] 00:14:43.087 }' 00:14:43.087 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.087 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.654 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:43.654 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:43.654 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:43.654 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:43.654 12:13:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:43.654 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:43.654 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:43.654 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.654 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:43.654 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.654 [2024-11-25 12:13:39.512315] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:43.654 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.654 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:43.654 "name": "raid_bdev1", 00:14:43.654 "aliases": [ 00:14:43.654 "6c7ceab7-387e-4a25-812a-c32c8ba6c82a" 00:14:43.654 ], 00:14:43.654 "product_name": "Raid Volume", 00:14:43.654 "block_size": 512, 00:14:43.654 "num_blocks": 63488, 00:14:43.654 "uuid": "6c7ceab7-387e-4a25-812a-c32c8ba6c82a", 00:14:43.654 "assigned_rate_limits": { 00:14:43.654 "rw_ios_per_sec": 0, 00:14:43.654 "rw_mbytes_per_sec": 0, 00:14:43.654 "r_mbytes_per_sec": 0, 00:14:43.654 "w_mbytes_per_sec": 0 00:14:43.654 }, 00:14:43.654 "claimed": false, 00:14:43.654 "zoned": false, 00:14:43.654 "supported_io_types": { 00:14:43.654 "read": true, 00:14:43.654 "write": true, 00:14:43.654 "unmap": false, 00:14:43.654 "flush": false, 00:14:43.654 "reset": true, 00:14:43.654 "nvme_admin": false, 00:14:43.654 "nvme_io": false, 00:14:43.654 "nvme_io_md": false, 00:14:43.654 "write_zeroes": true, 00:14:43.654 "zcopy": false, 00:14:43.654 "get_zone_info": false, 00:14:43.654 "zone_management": false, 00:14:43.654 "zone_append": false, 00:14:43.654 "compare": false, 00:14:43.654 
"compare_and_write": false, 00:14:43.654 "abort": false, 00:14:43.654 "seek_hole": false, 00:14:43.654 "seek_data": false, 00:14:43.654 "copy": false, 00:14:43.654 "nvme_iov_md": false 00:14:43.654 }, 00:14:43.654 "memory_domains": [ 00:14:43.654 { 00:14:43.654 "dma_device_id": "system", 00:14:43.654 "dma_device_type": 1 00:14:43.654 }, 00:14:43.654 { 00:14:43.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.654 "dma_device_type": 2 00:14:43.654 }, 00:14:43.654 { 00:14:43.654 "dma_device_id": "system", 00:14:43.654 "dma_device_type": 1 00:14:43.654 }, 00:14:43.654 { 00:14:43.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.654 "dma_device_type": 2 00:14:43.654 }, 00:14:43.654 { 00:14:43.654 "dma_device_id": "system", 00:14:43.654 "dma_device_type": 1 00:14:43.654 }, 00:14:43.654 { 00:14:43.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.654 "dma_device_type": 2 00:14:43.654 } 00:14:43.654 ], 00:14:43.654 "driver_specific": { 00:14:43.654 "raid": { 00:14:43.654 "uuid": "6c7ceab7-387e-4a25-812a-c32c8ba6c82a", 00:14:43.654 "strip_size_kb": 0, 00:14:43.654 "state": "online", 00:14:43.654 "raid_level": "raid1", 00:14:43.654 "superblock": true, 00:14:43.654 "num_base_bdevs": 3, 00:14:43.654 "num_base_bdevs_discovered": 3, 00:14:43.654 "num_base_bdevs_operational": 3, 00:14:43.654 "base_bdevs_list": [ 00:14:43.654 { 00:14:43.654 "name": "pt1", 00:14:43.654 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:43.654 "is_configured": true, 00:14:43.654 "data_offset": 2048, 00:14:43.654 "data_size": 63488 00:14:43.654 }, 00:14:43.654 { 00:14:43.654 "name": "pt2", 00:14:43.654 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:43.654 "is_configured": true, 00:14:43.654 "data_offset": 2048, 00:14:43.654 "data_size": 63488 00:14:43.654 }, 00:14:43.654 { 00:14:43.654 "name": "pt3", 00:14:43.654 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:43.654 "is_configured": true, 00:14:43.654 "data_offset": 2048, 00:14:43.654 "data_size": 63488 00:14:43.654 } 
00:14:43.654 ] 00:14:43.654 } 00:14:43.654 } 00:14:43.654 }' 00:14:43.654 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:43.654 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:43.654 pt2 00:14:43.654 pt3' 00:14:43.654 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.654 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:43.654 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:43.654 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.654 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:43.654 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.654 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.654 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.654 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:43.654 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:43.654 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:43.654 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:43.654 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.654 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.654 12:13:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.654 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.912 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:43.912 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:43.912 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:43.912 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:43.912 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.912 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.912 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.912 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.912 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:43.912 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:43.912 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:43.912 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.912 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.912 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:43.912 [2024-11-25 12:13:39.824308] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:43.912 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:14:43.912 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6c7ceab7-387e-4a25-812a-c32c8ba6c82a 00:14:43.912 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 6c7ceab7-387e-4a25-812a-c32c8ba6c82a ']' 00:14:43.912 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:43.912 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.912 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.912 [2024-11-25 12:13:39.871983] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:43.912 [2024-11-25 12:13:39.872034] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:43.912 [2024-11-25 12:13:39.872124] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:43.912 [2024-11-25 12:13:39.872232] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:43.912 [2024-11-25 12:13:39.872249] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:43.912 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.912 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:43.912 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.912 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.912 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.913 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.913 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:43.913 
12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:43.913 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:43.913 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:43.913 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.913 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.913 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.913 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:43.913 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:43.913 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.913 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.913 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.913 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:43.913 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:43.913 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.913 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.913 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.913 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:43.913 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.913 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r 
'[.[] | select(.product_name == "passthru")] | any' 00:14:43.913 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.913 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.170 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:44.170 12:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:44.170 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:44.170 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:44.170 12:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:44.170 12:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:44.170 12:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:44.170 12:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:44.170 12:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:44.170 12:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.170 12:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.170 [2024-11-25 12:13:40.008116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:44.170 [2024-11-25 12:13:40.010693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:44.170 [2024-11-25 12:13:40.010767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev malloc3 is claimed 00:14:44.170 [2024-11-25 12:13:40.010842] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:44.170 [2024-11-25 12:13:40.010922] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:44.170 [2024-11-25 12:13:40.010958] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:44.170 [2024-11-25 12:13:40.010987] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:44.170 [2024-11-25 12:13:40.011004] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:44.170 request: 00:14:44.170 { 00:14:44.170 "name": "raid_bdev1", 00:14:44.170 "raid_level": "raid1", 00:14:44.170 "base_bdevs": [ 00:14:44.170 "malloc1", 00:14:44.170 "malloc2", 00:14:44.170 "malloc3" 00:14:44.170 ], 00:14:44.170 "superblock": false, 00:14:44.170 "method": "bdev_raid_create", 00:14:44.170 "req_id": 1 00:14:44.170 } 00:14:44.170 Got JSON-RPC error response 00:14:44.170 response: 00:14:44.170 { 00:14:44.170 "code": -17, 00:14:44.170 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:44.170 } 00:14:44.170 12:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:44.170 12:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:44.171 12:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:44.171 12:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:44.171 12:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:44.171 12:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.171 
12:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.171 12:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.171 12:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:44.171 12:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.171 12:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:44.171 12:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:44.171 12:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:44.171 12:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.171 12:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.171 [2024-11-25 12:13:40.072056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:44.171 [2024-11-25 12:13:40.072268] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.171 [2024-11-25 12:13:40.072359] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:44.171 [2024-11-25 12:13:40.072567] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.171 [2024-11-25 12:13:40.075494] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.171 [2024-11-25 12:13:40.075651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:44.171 [2024-11-25 12:13:40.075858] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:44.171 [2024-11-25 12:13:40.076034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:44.171 pt1 00:14:44.171 12:13:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.171 12:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:44.171 12:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.171 12:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:44.171 12:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:44.171 12:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:44.171 12:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:44.171 12:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.171 12:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.171 12:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.171 12:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.171 12:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.171 12:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.171 12:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.171 12:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.171 12:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.171 12:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.171 "name": "raid_bdev1", 00:14:44.171 "uuid": "6c7ceab7-387e-4a25-812a-c32c8ba6c82a", 00:14:44.171 "strip_size_kb": 0, 00:14:44.171 "state": "configuring", 00:14:44.171 
"raid_level": "raid1", 00:14:44.171 "superblock": true, 00:14:44.171 "num_base_bdevs": 3, 00:14:44.171 "num_base_bdevs_discovered": 1, 00:14:44.171 "num_base_bdevs_operational": 3, 00:14:44.171 "base_bdevs_list": [ 00:14:44.171 { 00:14:44.171 "name": "pt1", 00:14:44.171 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:44.171 "is_configured": true, 00:14:44.171 "data_offset": 2048, 00:14:44.171 "data_size": 63488 00:14:44.171 }, 00:14:44.171 { 00:14:44.171 "name": null, 00:14:44.171 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:44.171 "is_configured": false, 00:14:44.171 "data_offset": 2048, 00:14:44.171 "data_size": 63488 00:14:44.171 }, 00:14:44.171 { 00:14:44.171 "name": null, 00:14:44.171 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:44.171 "is_configured": false, 00:14:44.171 "data_offset": 2048, 00:14:44.171 "data_size": 63488 00:14:44.171 } 00:14:44.171 ] 00:14:44.171 }' 00:14:44.171 12:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.171 12:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.737 12:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:14:44.737 12:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:44.737 12:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.737 12:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.737 [2024-11-25 12:13:40.584543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:44.737 [2024-11-25 12:13:40.584630] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.737 [2024-11-25 12:13:40.584665] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:44.737 [2024-11-25 12:13:40.584680] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.737 [2024-11-25 12:13:40.585244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.737 [2024-11-25 12:13:40.585286] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:44.737 [2024-11-25 12:13:40.585419] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:44.737 [2024-11-25 12:13:40.585453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:44.737 pt2 00:14:44.737 12:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.737 12:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:44.737 12:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.737 12:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.737 [2024-11-25 12:13:40.592521] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:44.737 12:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.737 12:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:44.737 12:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.737 12:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:44.737 12:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:44.737 12:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:44.737 12:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:44.737 12:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:14:44.737 12:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.737 12:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.737 12:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.737 12:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.737 12:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.737 12:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.738 12:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.738 12:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.738 12:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.738 "name": "raid_bdev1", 00:14:44.738 "uuid": "6c7ceab7-387e-4a25-812a-c32c8ba6c82a", 00:14:44.738 "strip_size_kb": 0, 00:14:44.738 "state": "configuring", 00:14:44.738 "raid_level": "raid1", 00:14:44.738 "superblock": true, 00:14:44.738 "num_base_bdevs": 3, 00:14:44.738 "num_base_bdevs_discovered": 1, 00:14:44.738 "num_base_bdevs_operational": 3, 00:14:44.738 "base_bdevs_list": [ 00:14:44.738 { 00:14:44.738 "name": "pt1", 00:14:44.738 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:44.738 "is_configured": true, 00:14:44.738 "data_offset": 2048, 00:14:44.738 "data_size": 63488 00:14:44.738 }, 00:14:44.738 { 00:14:44.738 "name": null, 00:14:44.738 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:44.738 "is_configured": false, 00:14:44.738 "data_offset": 0, 00:14:44.738 "data_size": 63488 00:14:44.738 }, 00:14:44.738 { 00:14:44.738 "name": null, 00:14:44.738 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:44.738 "is_configured": false, 00:14:44.738 "data_offset": 2048, 00:14:44.738 
"data_size": 63488 00:14:44.738 } 00:14:44.738 ] 00:14:44.738 }' 00:14:44.738 12:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.738 12:13:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.305 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:45.305 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:45.305 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:45.305 12:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.305 12:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.305 [2024-11-25 12:13:41.088663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:45.305 [2024-11-25 12:13:41.088957] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.305 [2024-11-25 12:13:41.088995] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:45.305 [2024-11-25 12:13:41.089014] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.305 [2024-11-25 12:13:41.089616] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.305 [2024-11-25 12:13:41.089649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:45.305 [2024-11-25 12:13:41.089751] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:45.305 [2024-11-25 12:13:41.089805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:45.305 pt2 00:14:45.305 12:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.305 12:13:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:45.305 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:45.305 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:45.305 12:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.305 12:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.305 [2024-11-25 12:13:41.096627] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:45.305 [2024-11-25 12:13:41.096687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.305 [2024-11-25 12:13:41.096708] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:45.305 [2024-11-25 12:13:41.096735] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.305 [2024-11-25 12:13:41.097177] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.305 [2024-11-25 12:13:41.097218] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:45.305 [2024-11-25 12:13:41.097293] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:45.305 [2024-11-25 12:13:41.097324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:45.305 [2024-11-25 12:13:41.097493] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:45.305 [2024-11-25 12:13:41.097519] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:45.305 [2024-11-25 12:13:41.097817] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:45.305 [2024-11-25 12:13:41.098041] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:14:45.305 [2024-11-25 12:13:41.098058] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:45.305 [2024-11-25 12:13:41.098230] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:45.305 pt3 00:14:45.305 12:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.305 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:45.305 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:45.305 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:45.305 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.305 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.305 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:45.305 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:45.305 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:45.305 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.306 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.306 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.306 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.306 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.306 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.306 12:13:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.306 12:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.306 12:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.306 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.306 "name": "raid_bdev1", 00:14:45.306 "uuid": "6c7ceab7-387e-4a25-812a-c32c8ba6c82a", 00:14:45.306 "strip_size_kb": 0, 00:14:45.306 "state": "online", 00:14:45.306 "raid_level": "raid1", 00:14:45.306 "superblock": true, 00:14:45.306 "num_base_bdevs": 3, 00:14:45.306 "num_base_bdevs_discovered": 3, 00:14:45.306 "num_base_bdevs_operational": 3, 00:14:45.306 "base_bdevs_list": [ 00:14:45.306 { 00:14:45.306 "name": "pt1", 00:14:45.306 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:45.306 "is_configured": true, 00:14:45.306 "data_offset": 2048, 00:14:45.306 "data_size": 63488 00:14:45.306 }, 00:14:45.306 { 00:14:45.306 "name": "pt2", 00:14:45.306 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:45.306 "is_configured": true, 00:14:45.306 "data_offset": 2048, 00:14:45.306 "data_size": 63488 00:14:45.306 }, 00:14:45.306 { 00:14:45.306 "name": "pt3", 00:14:45.306 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:45.306 "is_configured": true, 00:14:45.306 "data_offset": 2048, 00:14:45.306 "data_size": 63488 00:14:45.306 } 00:14:45.306 ] 00:14:45.306 }' 00:14:45.306 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.306 12:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.564 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:45.564 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:45.564 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:45.564 12:13:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:45.564 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:45.564 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:45.564 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:45.564 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:45.564 12:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.564 12:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.564 [2024-11-25 12:13:41.601184] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:45.564 12:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.564 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:45.564 "name": "raid_bdev1", 00:14:45.564 "aliases": [ 00:14:45.564 "6c7ceab7-387e-4a25-812a-c32c8ba6c82a" 00:14:45.564 ], 00:14:45.564 "product_name": "Raid Volume", 00:14:45.564 "block_size": 512, 00:14:45.564 "num_blocks": 63488, 00:14:45.564 "uuid": "6c7ceab7-387e-4a25-812a-c32c8ba6c82a", 00:14:45.564 "assigned_rate_limits": { 00:14:45.564 "rw_ios_per_sec": 0, 00:14:45.564 "rw_mbytes_per_sec": 0, 00:14:45.564 "r_mbytes_per_sec": 0, 00:14:45.564 "w_mbytes_per_sec": 0 00:14:45.564 }, 00:14:45.564 "claimed": false, 00:14:45.564 "zoned": false, 00:14:45.564 "supported_io_types": { 00:14:45.564 "read": true, 00:14:45.564 "write": true, 00:14:45.564 "unmap": false, 00:14:45.564 "flush": false, 00:14:45.564 "reset": true, 00:14:45.564 "nvme_admin": false, 00:14:45.564 "nvme_io": false, 00:14:45.564 "nvme_io_md": false, 00:14:45.564 "write_zeroes": true, 00:14:45.564 "zcopy": false, 00:14:45.564 "get_zone_info": false, 00:14:45.564 
"zone_management": false, 00:14:45.564 "zone_append": false, 00:14:45.564 "compare": false, 00:14:45.564 "compare_and_write": false, 00:14:45.564 "abort": false, 00:14:45.564 "seek_hole": false, 00:14:45.564 "seek_data": false, 00:14:45.564 "copy": false, 00:14:45.564 "nvme_iov_md": false 00:14:45.564 }, 00:14:45.564 "memory_domains": [ 00:14:45.564 { 00:14:45.564 "dma_device_id": "system", 00:14:45.564 "dma_device_type": 1 00:14:45.564 }, 00:14:45.564 { 00:14:45.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.564 "dma_device_type": 2 00:14:45.564 }, 00:14:45.564 { 00:14:45.564 "dma_device_id": "system", 00:14:45.564 "dma_device_type": 1 00:14:45.564 }, 00:14:45.564 { 00:14:45.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.564 "dma_device_type": 2 00:14:45.564 }, 00:14:45.564 { 00:14:45.564 "dma_device_id": "system", 00:14:45.564 "dma_device_type": 1 00:14:45.564 }, 00:14:45.564 { 00:14:45.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.564 "dma_device_type": 2 00:14:45.564 } 00:14:45.564 ], 00:14:45.564 "driver_specific": { 00:14:45.564 "raid": { 00:14:45.564 "uuid": "6c7ceab7-387e-4a25-812a-c32c8ba6c82a", 00:14:45.564 "strip_size_kb": 0, 00:14:45.564 "state": "online", 00:14:45.564 "raid_level": "raid1", 00:14:45.564 "superblock": true, 00:14:45.564 "num_base_bdevs": 3, 00:14:45.564 "num_base_bdevs_discovered": 3, 00:14:45.564 "num_base_bdevs_operational": 3, 00:14:45.564 "base_bdevs_list": [ 00:14:45.564 { 00:14:45.565 "name": "pt1", 00:14:45.565 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:45.565 "is_configured": true, 00:14:45.565 "data_offset": 2048, 00:14:45.565 "data_size": 63488 00:14:45.565 }, 00:14:45.565 { 00:14:45.565 "name": "pt2", 00:14:45.565 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:45.565 "is_configured": true, 00:14:45.565 "data_offset": 2048, 00:14:45.565 "data_size": 63488 00:14:45.565 }, 00:14:45.565 { 00:14:45.565 "name": "pt3", 00:14:45.565 "uuid": "00000000-0000-0000-0000-000000000003", 
00:14:45.565 "is_configured": true, 00:14:45.565 "data_offset": 2048, 00:14:45.565 "data_size": 63488 00:14:45.565 } 00:14:45.565 ] 00:14:45.565 } 00:14:45.565 } 00:14:45.565 }' 00:14:45.565 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:45.823 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:45.823 pt2 00:14:45.823 pt3' 00:14:45.823 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.823 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:45.823 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:45.823 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:45.823 12:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.823 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.823 12:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.823 12:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.823 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:45.823 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:45.823 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:45.823 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.823 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 
-- # rpc_cmd bdev_get_bdevs -b pt2 00:14:45.823 12:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.823 12:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.823 12:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.823 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:45.823 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:45.823 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:45.823 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:45.823 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.823 12:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.823 12:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.823 12:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.081 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:46.081 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:46.081 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:46.081 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:46.081 12:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.081 12:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.081 [2024-11-25 12:13:41.925205] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:14:46.081 12:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.081 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6c7ceab7-387e-4a25-812a-c32c8ba6c82a '!=' 6c7ceab7-387e-4a25-812a-c32c8ba6c82a ']' 00:14:46.081 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:14:46.081 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:46.082 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:46.082 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:46.082 12:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.082 12:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.082 [2024-11-25 12:13:41.972936] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:46.082 12:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.082 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:46.082 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.082 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.082 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:46.082 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:46.082 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:46.082 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.082 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:14:46.082 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.082 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.082 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.082 12:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.082 12:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.082 12:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.082 12:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.082 12:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.082 "name": "raid_bdev1", 00:14:46.082 "uuid": "6c7ceab7-387e-4a25-812a-c32c8ba6c82a", 00:14:46.082 "strip_size_kb": 0, 00:14:46.082 "state": "online", 00:14:46.082 "raid_level": "raid1", 00:14:46.082 "superblock": true, 00:14:46.082 "num_base_bdevs": 3, 00:14:46.082 "num_base_bdevs_discovered": 2, 00:14:46.082 "num_base_bdevs_operational": 2, 00:14:46.082 "base_bdevs_list": [ 00:14:46.082 { 00:14:46.082 "name": null, 00:14:46.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.082 "is_configured": false, 00:14:46.082 "data_offset": 0, 00:14:46.082 "data_size": 63488 00:14:46.082 }, 00:14:46.082 { 00:14:46.082 "name": "pt2", 00:14:46.082 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:46.082 "is_configured": true, 00:14:46.082 "data_offset": 2048, 00:14:46.082 "data_size": 63488 00:14:46.082 }, 00:14:46.082 { 00:14:46.082 "name": "pt3", 00:14:46.082 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:46.082 "is_configured": true, 00:14:46.082 "data_offset": 2048, 00:14:46.082 "data_size": 63488 00:14:46.082 } 00:14:46.082 ] 00:14:46.082 }' 00:14:46.082 12:13:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.082 12:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.649 [2024-11-25 12:13:42.477055] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:46.649 [2024-11-25 12:13:42.477095] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:46.649 [2024-11-25 12:13:42.477194] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:46.649 [2024-11-25 12:13:42.477277] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:46.649 [2024-11-25 12:13:42.477302] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:46.649 
12:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:46.649 [2024-11-25 12:13:42.561024] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:46.649 [2024-11-25 12:13:42.561109] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:46.649 [2024-11-25 12:13:42.561138] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:46.649 [2024-11-25 12:13:42.561155] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:46.649 [2024-11-25 12:13:42.563965] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:46.649 [2024-11-25 12:13:42.564017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:46.649 [2024-11-25 12:13:42.564110] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:46.649 [2024-11-25 12:13:42.564172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:46.649 pt2 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.649 "name": "raid_bdev1", 00:14:46.649 "uuid": "6c7ceab7-387e-4a25-812a-c32c8ba6c82a", 00:14:46.649 "strip_size_kb": 0, 00:14:46.649 "state": "configuring", 00:14:46.649 "raid_level": "raid1", 00:14:46.649 "superblock": true, 00:14:46.649 "num_base_bdevs": 3, 00:14:46.649 "num_base_bdevs_discovered": 1, 00:14:46.649 "num_base_bdevs_operational": 2, 00:14:46.649 "base_bdevs_list": [ 00:14:46.649 { 00:14:46.649 "name": null, 00:14:46.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.649 "is_configured": false, 00:14:46.649 "data_offset": 2048, 00:14:46.649 "data_size": 63488 00:14:46.649 }, 00:14:46.649 { 00:14:46.649 "name": "pt2", 00:14:46.649 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:46.649 "is_configured": true, 00:14:46.649 "data_offset": 2048, 00:14:46.649 "data_size": 63488 00:14:46.649 }, 00:14:46.649 { 00:14:46.649 "name": null, 00:14:46.649 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:46.649 "is_configured": false, 00:14:46.649 "data_offset": 2048, 00:14:46.649 "data_size": 63488 00:14:46.649 } 00:14:46.649 ] 00:14:46.649 }' 
00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.649 12:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.216 12:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:47.216 12:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:47.216 12:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:14:47.216 12:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:47.216 12:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.216 12:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.216 [2024-11-25 12:13:43.101218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:47.216 [2024-11-25 12:13:43.101297] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:47.216 [2024-11-25 12:13:43.101328] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:47.216 [2024-11-25 12:13:43.101378] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:47.216 [2024-11-25 12:13:43.101938] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:47.216 [2024-11-25 12:13:43.101977] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:47.216 [2024-11-25 12:13:43.102117] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:47.216 [2024-11-25 12:13:43.102161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:47.216 [2024-11-25 12:13:43.102307] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:47.216 [2024-11-25 12:13:43.102329] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:47.216 [2024-11-25 12:13:43.102692] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:47.216 [2024-11-25 12:13:43.102939] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:47.216 [2024-11-25 12:13:43.102961] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:47.216 [2024-11-25 12:13:43.103148] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:47.216 pt3 00:14:47.216 12:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.216 12:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:47.216 12:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:47.216 12:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:47.216 12:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:47.216 12:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:47.216 12:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:47.216 12:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.216 12:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.216 12:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.216 12:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.216 12:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.216 12:13:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.216 12:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.216 12:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.216 12:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.216 12:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.216 "name": "raid_bdev1", 00:14:47.216 "uuid": "6c7ceab7-387e-4a25-812a-c32c8ba6c82a", 00:14:47.216 "strip_size_kb": 0, 00:14:47.216 "state": "online", 00:14:47.216 "raid_level": "raid1", 00:14:47.216 "superblock": true, 00:14:47.216 "num_base_bdevs": 3, 00:14:47.216 "num_base_bdevs_discovered": 2, 00:14:47.216 "num_base_bdevs_operational": 2, 00:14:47.216 "base_bdevs_list": [ 00:14:47.216 { 00:14:47.216 "name": null, 00:14:47.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.216 "is_configured": false, 00:14:47.216 "data_offset": 2048, 00:14:47.216 "data_size": 63488 00:14:47.216 }, 00:14:47.216 { 00:14:47.216 "name": "pt2", 00:14:47.216 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:47.216 "is_configured": true, 00:14:47.216 "data_offset": 2048, 00:14:47.216 "data_size": 63488 00:14:47.216 }, 00:14:47.216 { 00:14:47.216 "name": "pt3", 00:14:47.216 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:47.216 "is_configured": true, 00:14:47.216 "data_offset": 2048, 00:14:47.216 "data_size": 63488 00:14:47.216 } 00:14:47.216 ] 00:14:47.216 }' 00:14:47.216 12:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.216 12:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.782 12:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:47.782 12:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:47.782 12:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.782 [2024-11-25 12:13:43.613328] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:47.782 [2024-11-25 12:13:43.613380] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:47.782 [2024-11-25 12:13:43.613488] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:47.782 [2024-11-25 12:13:43.613577] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:47.782 [2024-11-25 12:13:43.613601] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:47.782 12:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.782 12:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.782 12:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.782 12:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.782 12:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:47.782 12:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.782 12:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:47.782 12:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:47.782 12:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:14:47.782 12:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:14:47.782 12:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:14:47.782 12:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:47.782 12:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.782 12:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.782 12:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:47.782 12:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.782 12:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.782 [2024-11-25 12:13:43.685370] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:47.782 [2024-11-25 12:13:43.685436] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:47.782 [2024-11-25 12:13:43.685464] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:47.782 [2024-11-25 12:13:43.685478] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:47.782 [2024-11-25 12:13:43.688474] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:47.782 [2024-11-25 12:13:43.688520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:47.782 [2024-11-25 12:13:43.688622] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:47.782 [2024-11-25 12:13:43.688680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:47.782 [2024-11-25 12:13:43.688878] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:47.782 [2024-11-25 12:13:43.688896] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:47.782 [2024-11-25 12:13:43.688921] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state 
configuring 00:14:47.782 [2024-11-25 12:13:43.688991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:47.782 pt1 00:14:47.782 12:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.782 12:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:14:47.782 12:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:14:47.782 12:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:47.782 12:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:47.782 12:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:47.782 12:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:47.782 12:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:47.782 12:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.782 12:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.782 12:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.782 12:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.782 12:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.782 12:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.782 12:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.782 12:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.782 12:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:47.782 12:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.782 "name": "raid_bdev1", 00:14:47.782 "uuid": "6c7ceab7-387e-4a25-812a-c32c8ba6c82a", 00:14:47.782 "strip_size_kb": 0, 00:14:47.782 "state": "configuring", 00:14:47.782 "raid_level": "raid1", 00:14:47.782 "superblock": true, 00:14:47.782 "num_base_bdevs": 3, 00:14:47.782 "num_base_bdevs_discovered": 1, 00:14:47.782 "num_base_bdevs_operational": 2, 00:14:47.782 "base_bdevs_list": [ 00:14:47.782 { 00:14:47.782 "name": null, 00:14:47.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.782 "is_configured": false, 00:14:47.782 "data_offset": 2048, 00:14:47.782 "data_size": 63488 00:14:47.782 }, 00:14:47.782 { 00:14:47.782 "name": "pt2", 00:14:47.782 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:47.782 "is_configured": true, 00:14:47.782 "data_offset": 2048, 00:14:47.782 "data_size": 63488 00:14:47.782 }, 00:14:47.782 { 00:14:47.782 "name": null, 00:14:47.782 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:47.782 "is_configured": false, 00:14:47.782 "data_offset": 2048, 00:14:47.782 "data_size": 63488 00:14:47.782 } 00:14:47.782 ] 00:14:47.782 }' 00:14:47.782 12:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.782 12:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.347 12:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:48.347 12:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.347 12:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.347 12:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:48.347 12:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.347 12:13:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:48.347 12:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:48.347 12:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.347 12:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.347 [2024-11-25 12:13:44.265529] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:48.347 [2024-11-25 12:13:44.265603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:48.347 [2024-11-25 12:13:44.265636] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:48.347 [2024-11-25 12:13:44.265651] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:48.347 [2024-11-25 12:13:44.266268] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:48.347 [2024-11-25 12:13:44.266294] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:48.347 [2024-11-25 12:13:44.266408] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:48.347 [2024-11-25 12:13:44.266470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:48.347 [2024-11-25 12:13:44.266631] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:14:48.347 [2024-11-25 12:13:44.266648] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:48.347 [2024-11-25 12:13:44.266978] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:48.347 [2024-11-25 12:13:44.267177] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:14:48.347 [2024-11-25 12:13:44.267197] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:14:48.347 [2024-11-25 12:13:44.267393] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:48.347 pt3 00:14:48.347 12:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.347 12:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:48.347 12:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:48.347 12:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:48.347 12:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:48.347 12:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:48.347 12:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:48.347 12:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.348 12:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.348 12:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.348 12:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.348 12:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.348 12:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.348 12:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.348 12:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.348 12:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:14:48.348 12:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.348 "name": "raid_bdev1", 00:14:48.348 "uuid": "6c7ceab7-387e-4a25-812a-c32c8ba6c82a", 00:14:48.348 "strip_size_kb": 0, 00:14:48.348 "state": "online", 00:14:48.348 "raid_level": "raid1", 00:14:48.348 "superblock": true, 00:14:48.348 "num_base_bdevs": 3, 00:14:48.348 "num_base_bdevs_discovered": 2, 00:14:48.348 "num_base_bdevs_operational": 2, 00:14:48.348 "base_bdevs_list": [ 00:14:48.348 { 00:14:48.348 "name": null, 00:14:48.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.348 "is_configured": false, 00:14:48.348 "data_offset": 2048, 00:14:48.348 "data_size": 63488 00:14:48.348 }, 00:14:48.348 { 00:14:48.348 "name": "pt2", 00:14:48.348 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:48.348 "is_configured": true, 00:14:48.348 "data_offset": 2048, 00:14:48.348 "data_size": 63488 00:14:48.348 }, 00:14:48.348 { 00:14:48.348 "name": "pt3", 00:14:48.348 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:48.348 "is_configured": true, 00:14:48.348 "data_offset": 2048, 00:14:48.348 "data_size": 63488 00:14:48.348 } 00:14:48.348 ] 00:14:48.348 }' 00:14:48.348 12:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.348 12:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.003 12:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:49.003 12:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.003 12:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.003 12:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:49.003 12:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.003 12:13:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:49.003 12:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:49.003 12:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:49.003 12:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.003 12:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.003 [2024-11-25 12:13:44.834044] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:49.003 12:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.003 12:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 6c7ceab7-387e-4a25-812a-c32c8ba6c82a '!=' 6c7ceab7-387e-4a25-812a-c32c8ba6c82a ']' 00:14:49.003 12:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68728 00:14:49.003 12:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68728 ']' 00:14:49.003 12:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68728 00:14:49.003 12:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:14:49.003 12:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:49.003 12:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68728 00:14:49.003 killing process with pid 68728 00:14:49.003 12:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:49.003 12:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:49.003 12:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68728' 00:14:49.003 12:13:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68728 00:14:49.003 [2024-11-25 12:13:44.911421] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:49.003 12:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68728 00:14:49.003 [2024-11-25 12:13:44.911563] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:49.003 [2024-11-25 12:13:44.911648] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:49.003 [2024-11-25 12:13:44.911670] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:14:49.260 [2024-11-25 12:13:45.183681] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:50.196 12:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:50.196 00:14:50.196 real 0m8.425s 00:14:50.196 user 0m13.754s 00:14:50.196 sys 0m1.179s 00:14:50.196 12:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:50.196 12:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.196 ************************************ 00:14:50.196 END TEST raid_superblock_test 00:14:50.196 ************************************ 00:14:50.196 12:13:46 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:14:50.196 12:13:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:50.196 12:13:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:50.196 12:13:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:50.196 ************************************ 00:14:50.196 START TEST raid_read_error_test 00:14:50.196 ************************************ 00:14:50.196 12:13:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:14:50.196 12:13:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:14:50.196 12:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:14:50.196 12:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:14:50.196 12:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:50.196 12:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:50.196 12:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:50.196 12:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:50.196 12:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:50.196 12:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:50.196 12:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:50.196 12:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:50.196 12:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:50.196 12:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:50.196 12:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:50.196 12:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:50.196 12:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:50.196 12:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:50.196 12:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:50.196 12:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:50.196 12:13:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:50.196 12:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:50.196 12:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:14:50.196 12:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:14:50.196 12:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:50.196 12:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.23qn6nKQjF 00:14:50.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.196 12:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69180 00:14:50.196 12:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:50.196 12:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69180 00:14:50.196 12:13:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69180 ']' 00:14:50.196 12:13:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.196 12:13:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:50.196 12:13:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.196 12:13:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:50.196 12:13:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.455 [2024-11-25 12:13:46.390140] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:14:50.455 [2024-11-25 12:13:46.391118] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69180 ] 00:14:50.713 [2024-11-25 12:13:46.607457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.713 [2024-11-25 12:13:46.771883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.972 [2024-11-25 12:13:46.990257] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:50.972 [2024-11-25 12:13:46.990330] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.540 BaseBdev1_malloc 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.540 true 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.540 [2024-11-25 12:13:47.434061] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:51.540 [2024-11-25 12:13:47.434129] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.540 [2024-11-25 12:13:47.434164] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:51.540 [2024-11-25 12:13:47.434182] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.540 [2024-11-25 12:13:47.437014] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.540 [2024-11-25 12:13:47.437065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:51.540 BaseBdev1 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.540 BaseBdev2_malloc 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.540 true 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.540 [2024-11-25 12:13:47.496892] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:51.540 [2024-11-25 12:13:47.496961] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.540 [2024-11-25 12:13:47.496991] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:51.540 [2024-11-25 12:13:47.497009] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.540 [2024-11-25 12:13:47.499816] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.540 [2024-11-25 12:13:47.499864] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:51.540 BaseBdev2 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.540 BaseBdev3_malloc 00:14:51.540 12:13:47 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.540 true 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.540 [2024-11-25 12:13:47.570198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:51.540 [2024-11-25 12:13:47.570416] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.540 [2024-11-25 12:13:47.570458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:51.540 [2024-11-25 12:13:47.570478] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.540 [2024-11-25 12:13:47.573373] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.540 [2024-11-25 12:13:47.573426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:51.540 BaseBdev3 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.540 [2024-11-25 12:13:47.578290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:51.540 [2024-11-25 12:13:47.580873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:51.540 [2024-11-25 12:13:47.581117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:51.540 [2024-11-25 12:13:47.581472] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:51.540 [2024-11-25 12:13:47.581494] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:51.540 [2024-11-25 12:13:47.581827] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:14:51.540 [2024-11-25 12:13:47.582107] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:51.540 [2024-11-25 12:13:47.582127] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:51.540 [2024-11-25 12:13:47.582421] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:51.540 12:13:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.540 12:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.799 12:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.799 "name": "raid_bdev1", 00:14:51.799 "uuid": "5b8ea06a-1a1a-4455-a34d-68f561506d02", 00:14:51.799 "strip_size_kb": 0, 00:14:51.799 "state": "online", 00:14:51.799 "raid_level": "raid1", 00:14:51.799 "superblock": true, 00:14:51.799 "num_base_bdevs": 3, 00:14:51.799 "num_base_bdevs_discovered": 3, 00:14:51.799 "num_base_bdevs_operational": 3, 00:14:51.799 "base_bdevs_list": [ 00:14:51.799 { 00:14:51.799 "name": "BaseBdev1", 00:14:51.799 "uuid": "b5cf5f5e-8c14-5ccd-9446-c7b8bd470712", 00:14:51.799 "is_configured": true, 00:14:51.799 "data_offset": 2048, 00:14:51.799 "data_size": 63488 00:14:51.799 }, 00:14:51.799 { 00:14:51.799 "name": "BaseBdev2", 00:14:51.799 "uuid": "d0d88bb5-90a2-552c-af47-f63dab9249fe", 00:14:51.799 "is_configured": true, 00:14:51.799 "data_offset": 2048, 00:14:51.799 "data_size": 63488 
00:14:51.799 }, 00:14:51.799 { 00:14:51.799 "name": "BaseBdev3", 00:14:51.799 "uuid": "4e6dc7e6-60d5-5b65-b1a7-86bc815e8b20", 00:14:51.799 "is_configured": true, 00:14:51.799 "data_offset": 2048, 00:14:51.799 "data_size": 63488 00:14:51.799 } 00:14:51.799 ] 00:14:51.799 }' 00:14:51.799 12:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.799 12:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.057 12:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:52.057 12:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:52.318 [2024-11-25 12:13:48.259910] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:14:53.256 12:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:53.256 12:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.256 12:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.256 12:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.256 12:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:53.256 12:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:53.256 12:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:14:53.256 12:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:14:53.256 12:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:53.256 12:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:53.256 
12:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:53.256 12:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:53.256 12:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:53.256 12:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:53.256 12:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.256 12:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.256 12:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.256 12:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.256 12:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.256 12:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.256 12:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.256 12:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.256 12:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.256 12:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.256 "name": "raid_bdev1", 00:14:53.256 "uuid": "5b8ea06a-1a1a-4455-a34d-68f561506d02", 00:14:53.256 "strip_size_kb": 0, 00:14:53.256 "state": "online", 00:14:53.256 "raid_level": "raid1", 00:14:53.256 "superblock": true, 00:14:53.256 "num_base_bdevs": 3, 00:14:53.256 "num_base_bdevs_discovered": 3, 00:14:53.256 "num_base_bdevs_operational": 3, 00:14:53.256 "base_bdevs_list": [ 00:14:53.256 { 00:14:53.256 "name": "BaseBdev1", 00:14:53.256 "uuid": "b5cf5f5e-8c14-5ccd-9446-c7b8bd470712", 
00:14:53.256 "is_configured": true, 00:14:53.256 "data_offset": 2048, 00:14:53.256 "data_size": 63488 00:14:53.256 }, 00:14:53.256 { 00:14:53.256 "name": "BaseBdev2", 00:14:53.256 "uuid": "d0d88bb5-90a2-552c-af47-f63dab9249fe", 00:14:53.256 "is_configured": true, 00:14:53.256 "data_offset": 2048, 00:14:53.256 "data_size": 63488 00:14:53.256 }, 00:14:53.256 { 00:14:53.256 "name": "BaseBdev3", 00:14:53.256 "uuid": "4e6dc7e6-60d5-5b65-b1a7-86bc815e8b20", 00:14:53.256 "is_configured": true, 00:14:53.256 "data_offset": 2048, 00:14:53.256 "data_size": 63488 00:14:53.256 } 00:14:53.256 ] 00:14:53.256 }' 00:14:53.256 12:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.256 12:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.824 12:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:53.824 12:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.824 12:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.824 [2024-11-25 12:13:49.651059] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:53.824 [2024-11-25 12:13:49.651240] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:53.824 [2024-11-25 12:13:49.654857] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:53.824 [2024-11-25 12:13:49.654919] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:53.824 [2024-11-25 12:13:49.655112] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:53.824 [2024-11-25 12:13:49.655131] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:53.824 { 00:14:53.824 "results": [ 00:14:53.824 { 00:14:53.824 "job": "raid_bdev1", 
00:14:53.824 "core_mask": "0x1", 00:14:53.824 "workload": "randrw", 00:14:53.824 "percentage": 50, 00:14:53.824 "status": "finished", 00:14:53.824 "queue_depth": 1, 00:14:53.824 "io_size": 131072, 00:14:53.824 "runtime": 1.388805, 00:14:53.824 "iops": 9759.469471956107, 00:14:53.824 "mibps": 1219.9336839945133, 00:14:53.824 "io_failed": 0, 00:14:53.824 "io_timeout": 0, 00:14:53.824 "avg_latency_us": 98.41181965739734, 00:14:53.824 "min_latency_us": 41.89090909090909, 00:14:53.824 "max_latency_us": 1817.1345454545456 00:14:53.824 } 00:14:53.824 ], 00:14:53.824 "core_count": 1 00:14:53.824 } 00:14:53.824 12:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.824 12:13:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69180 00:14:53.824 12:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69180 ']' 00:14:53.824 12:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69180 00:14:53.824 12:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:14:53.824 12:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:53.824 12:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69180 00:14:53.824 killing process with pid 69180 00:14:53.824 12:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:53.824 12:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:53.824 12:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69180' 00:14:53.824 12:13:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69180 00:14:53.824 [2024-11-25 12:13:49.693100] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:53.824 12:13:49 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69180 00:14:53.824 [2024-11-25 12:13:49.895999] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:55.198 12:13:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.23qn6nKQjF 00:14:55.198 12:13:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:55.198 12:13:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:55.198 12:13:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:14:55.198 12:13:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:14:55.198 12:13:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:55.198 12:13:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:55.198 12:13:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:55.198 00:14:55.198 real 0m4.731s 00:14:55.198 user 0m5.866s 00:14:55.198 sys 0m0.607s 00:14:55.198 12:13:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:55.198 12:13:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.198 ************************************ 00:14:55.198 END TEST raid_read_error_test 00:14:55.198 ************************************ 00:14:55.198 12:13:51 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:14:55.198 12:13:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:55.198 12:13:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:55.198 12:13:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:55.198 ************************************ 00:14:55.198 START TEST raid_write_error_test 00:14:55.198 ************************************ 00:14:55.198 12:13:51 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:14:55.198 12:13:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:14:55.198 12:13:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:14:55.198 12:13:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:14:55.198 12:13:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:55.198 12:13:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:55.198 12:13:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:55.198 12:13:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:55.198 12:13:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:55.198 12:13:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:55.198 12:13:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:55.198 12:13:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:55.198 12:13:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:55.198 12:13:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:55.198 12:13:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:55.198 12:13:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:55.198 12:13:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:55.198 12:13:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:55.198 12:13:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:14:55.198 12:13:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:55.198 12:13:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:55.198 12:13:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:55.198 12:13:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:14:55.198 12:13:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:14:55.198 12:13:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:55.198 12:13:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.QuGt0euTAv 00:14:55.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.198 12:13:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69326 00:14:55.198 12:13:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:55.198 12:13:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69326 00:14:55.198 12:13:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69326 ']' 00:14:55.198 12:13:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.198 12:13:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:55.198 12:13:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:55.198 12:13:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:55.198 12:13:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.198 [2024-11-25 12:13:51.162220] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:14:55.198 [2024-11-25 12:13:51.162764] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69326 ] 00:14:55.455 [2024-11-25 12:13:51.367906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.455 [2024-11-25 12:13:51.494243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.713 [2024-11-25 12:13:51.696741] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:55.713 [2024-11-25 12:13:51.696999] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:56.346 12:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:56.346 12:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:56.346 12:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:56.346 12:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:56.346 12:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.346 12:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.346 BaseBdev1_malloc 00:14:56.346 12:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.346 12:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:14:56.346 12:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.346 12:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.346 true 00:14:56.346 12:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.346 12:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:56.346 12:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.346 12:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.346 [2024-11-25 12:13:52.220184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:56.346 [2024-11-25 12:13:52.220252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.346 [2024-11-25 12:13:52.220285] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:56.346 [2024-11-25 12:13:52.220303] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.346 [2024-11-25 12:13:52.223140] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.346 [2024-11-25 12:13:52.223333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:56.346 BaseBdev1 00:14:56.346 12:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.346 12:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:56.346 12:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:56.346 12:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.346 12:13:52 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:56.346 BaseBdev2_malloc 00:14:56.346 12:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.346 12:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:56.346 12:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.346 12:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.346 true 00:14:56.346 12:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.346 12:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:56.346 12:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.346 12:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.346 [2024-11-25 12:13:52.284230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:56.346 [2024-11-25 12:13:52.284300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.346 [2024-11-25 12:13:52.284330] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:56.346 [2024-11-25 12:13:52.284376] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.346 [2024-11-25 12:13:52.287148] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.346 [2024-11-25 12:13:52.287198] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:56.346 BaseBdev2 00:14:56.346 12:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.346 12:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:56.346 12:13:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:56.347 12:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.347 12:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.347 BaseBdev3_malloc 00:14:56.347 12:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.347 12:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:56.347 12:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.347 12:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.347 true 00:14:56.347 12:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.347 12:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:56.347 12:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.347 12:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.347 [2024-11-25 12:13:52.350929] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:56.347 [2024-11-25 12:13:52.351120] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.347 [2024-11-25 12:13:52.351161] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:56.347 [2024-11-25 12:13:52.351181] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.347 [2024-11-25 12:13:52.354073] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.347 [2024-11-25 12:13:52.354122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:14:56.347 BaseBdev3 00:14:56.347 12:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.347 12:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:14:56.347 12:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.347 12:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.347 [2024-11-25 12:13:52.359067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:56.347 [2024-11-25 12:13:52.361487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:56.347 [2024-11-25 12:13:52.361711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:56.347 [2024-11-25 12:13:52.361993] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:56.347 [2024-11-25 12:13:52.362012] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:56.347 [2024-11-25 12:13:52.362331] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:14:56.347 [2024-11-25 12:13:52.362585] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:56.347 [2024-11-25 12:13:52.362606] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:56.347 [2024-11-25 12:13:52.362787] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.347 12:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.347 12:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:56.347 12:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:14:56.347 12:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.347 12:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:56.347 12:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:56.347 12:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:56.347 12:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.347 12:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.347 12:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.347 12:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.347 12:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.347 12:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.347 12:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.347 12:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.347 12:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.630 12:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.630 "name": "raid_bdev1", 00:14:56.630 "uuid": "8195a277-309c-4d15-88d9-0a2599361876", 00:14:56.630 "strip_size_kb": 0, 00:14:56.630 "state": "online", 00:14:56.630 "raid_level": "raid1", 00:14:56.630 "superblock": true, 00:14:56.630 "num_base_bdevs": 3, 00:14:56.630 "num_base_bdevs_discovered": 3, 00:14:56.630 "num_base_bdevs_operational": 3, 00:14:56.630 "base_bdevs_list": [ 00:14:56.630 { 00:14:56.630 "name": "BaseBdev1", 00:14:56.630 
"uuid": "1cd0cac3-d673-5190-a177-c5462472687b", 00:14:56.630 "is_configured": true, 00:14:56.630 "data_offset": 2048, 00:14:56.630 "data_size": 63488 00:14:56.630 }, 00:14:56.630 { 00:14:56.630 "name": "BaseBdev2", 00:14:56.630 "uuid": "84b7594a-0add-5079-ab03-715e40b9e937", 00:14:56.630 "is_configured": true, 00:14:56.630 "data_offset": 2048, 00:14:56.630 "data_size": 63488 00:14:56.630 }, 00:14:56.630 { 00:14:56.630 "name": "BaseBdev3", 00:14:56.630 "uuid": "486f59a9-2af8-5a1f-95fa-6f3d7e931a3d", 00:14:56.630 "is_configured": true, 00:14:56.630 "data_offset": 2048, 00:14:56.630 "data_size": 63488 00:14:56.630 } 00:14:56.630 ] 00:14:56.630 }' 00:14:56.630 12:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.630 12:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.888 12:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:56.888 12:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:57.147 [2024-11-25 12:13:52.992596] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:14:58.083 12:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:58.083 12:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.083 12:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.083 [2024-11-25 12:13:53.876776] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:14:58.083 [2024-11-25 12:13:53.876845] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:58.083 [2024-11-25 12:13:53.877096] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:14:58.083 12:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.083 12:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:58.083 12:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:58.083 12:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:14:58.083 12:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:14:58.083 12:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:58.083 12:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.084 12:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.084 12:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:58.084 12:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:58.084 12:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:58.084 12:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.084 12:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.084 12:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.084 12:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.084 12:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.084 12:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.084 12:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:58.084 12:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.084 12:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.084 12:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.084 "name": "raid_bdev1", 00:14:58.084 "uuid": "8195a277-309c-4d15-88d9-0a2599361876", 00:14:58.084 "strip_size_kb": 0, 00:14:58.084 "state": "online", 00:14:58.084 "raid_level": "raid1", 00:14:58.084 "superblock": true, 00:14:58.084 "num_base_bdevs": 3, 00:14:58.084 "num_base_bdevs_discovered": 2, 00:14:58.084 "num_base_bdevs_operational": 2, 00:14:58.084 "base_bdevs_list": [ 00:14:58.084 { 00:14:58.084 "name": null, 00:14:58.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.084 "is_configured": false, 00:14:58.084 "data_offset": 0, 00:14:58.084 "data_size": 63488 00:14:58.084 }, 00:14:58.084 { 00:14:58.084 "name": "BaseBdev2", 00:14:58.084 "uuid": "84b7594a-0add-5079-ab03-715e40b9e937", 00:14:58.084 "is_configured": true, 00:14:58.084 "data_offset": 2048, 00:14:58.084 "data_size": 63488 00:14:58.084 }, 00:14:58.084 { 00:14:58.084 "name": "BaseBdev3", 00:14:58.084 "uuid": "486f59a9-2af8-5a1f-95fa-6f3d7e931a3d", 00:14:58.084 "is_configured": true, 00:14:58.084 "data_offset": 2048, 00:14:58.084 "data_size": 63488 00:14:58.084 } 00:14:58.084 ] 00:14:58.084 }' 00:14:58.084 12:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.084 12:13:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.343 12:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:58.343 12:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.343 12:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.343 [2024-11-25 12:13:54.421594] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:58.343 [2024-11-25 12:13:54.421773] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:58.343 [2024-11-25 12:13:54.425193] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:58.343 [2024-11-25 12:13:54.425398] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.343 [2024-11-25 12:13:54.425608] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:58.343 [2024-11-25 12:13:54.425770] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:58.343 { 00:14:58.343 "results": [ 00:14:58.343 { 00:14:58.343 "job": "raid_bdev1", 00:14:58.343 "core_mask": "0x1", 00:14:58.343 "workload": "randrw", 00:14:58.343 "percentage": 50, 00:14:58.343 "status": "finished", 00:14:58.343 "queue_depth": 1, 00:14:58.343 "io_size": 131072, 00:14:58.343 "runtime": 1.426636, 00:14:58.343 "iops": 10814.251147454572, 00:14:58.343 "mibps": 1351.7813934318215, 00:14:58.343 "io_failed": 0, 00:14:58.343 "io_timeout": 0, 00:14:58.343 "avg_latency_us": 88.34272515143658, 00:14:58.343 "min_latency_us": 42.82181818181818, 00:14:58.343 "max_latency_us": 1846.9236363636364 00:14:58.343 } 00:14:58.343 ], 00:14:58.343 "core_count": 1 00:14:58.343 } 00:14:58.343 12:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.343 12:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69326 00:14:58.343 12:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69326 ']' 00:14:58.343 12:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69326 00:14:58.602 12:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:14:58.602 12:13:54 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:58.602 12:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69326 00:14:58.602 killing process with pid 69326 00:14:58.602 12:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:58.602 12:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:58.602 12:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69326' 00:14:58.602 12:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69326 00:14:58.602 [2024-11-25 12:13:54.468190] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:58.602 12:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69326 00:14:58.602 [2024-11-25 12:13:54.674442] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:59.980 12:13:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.QuGt0euTAv 00:14:59.980 12:13:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:59.980 12:13:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:59.980 ************************************ 00:14:59.980 END TEST raid_write_error_test 00:14:59.980 ************************************ 00:14:59.980 12:13:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:14:59.980 12:13:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:14:59.980 12:13:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:59.980 12:13:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:59.980 12:13:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 
]] 00:14:59.980 00:14:59.980 real 0m4.730s 00:14:59.980 user 0m5.887s 00:14:59.980 sys 0m0.591s 00:14:59.980 12:13:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:59.980 12:13:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.980 12:13:55 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:14:59.980 12:13:55 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:14:59.980 12:13:55 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:14:59.980 12:13:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:59.980 12:13:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:59.980 12:13:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:59.980 ************************************ 00:14:59.980 START TEST raid_state_function_test 00:14:59.980 ************************************ 00:14:59.980 12:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:14:59.980 12:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:14:59.980 12:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:59.980 12:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:59.980 12:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:59.980 12:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:59.980 12:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:59.980 12:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:59.980 12:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:59.980 
12:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:59.980 12:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:59.980 12:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:59.980 12:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:59.981 12:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:59.981 12:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:59.981 12:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:59.981 12:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:59.981 12:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:59.981 12:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:59.981 12:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:59.981 Process raid pid: 69469 00:14:59.981 12:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:59.981 12:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:59.981 12:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:59.981 12:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:59.981 12:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:59.981 12:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:14:59.981 12:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 
00:14:59.981 12:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:59.981 12:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:59.981 12:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:59.981 12:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69469 00:14:59.981 12:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69469' 00:14:59.981 12:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:59.981 12:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69469 00:14:59.981 12:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69469 ']' 00:14:59.981 12:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.981 12:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:59.981 12:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.981 12:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:59.981 12:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.981 [2024-11-25 12:13:55.919807] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:14:59.981 [2024-11-25 12:13:55.920155] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:00.307 [2024-11-25 12:13:56.097147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.307 [2024-11-25 12:13:56.227629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.565 [2024-11-25 12:13:56.463666] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:00.565 [2024-11-25 12:13:56.463905] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:01.131 12:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:01.131 12:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:01.131 12:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:01.131 12:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.131 12:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.131 [2024-11-25 12:13:56.966420] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:01.131 [2024-11-25 12:13:56.966483] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:01.131 [2024-11-25 12:13:56.966500] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:01.131 [2024-11-25 12:13:56.966516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:01.131 [2024-11-25 12:13:56.966526] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:15:01.131 [2024-11-25 12:13:56.966541] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:01.131 [2024-11-25 12:13:56.966551] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:01.131 [2024-11-25 12:13:56.966565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:01.131 12:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.131 12:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:01.131 12:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.131 12:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:01.131 12:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:01.131 12:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.131 12:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:01.131 12:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.131 12:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.131 12:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.131 12:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.131 12:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.131 12:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.131 12:13:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.131 12:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.131 12:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.131 12:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.131 "name": "Existed_Raid", 00:15:01.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.131 "strip_size_kb": 64, 00:15:01.131 "state": "configuring", 00:15:01.131 "raid_level": "raid0", 00:15:01.131 "superblock": false, 00:15:01.131 "num_base_bdevs": 4, 00:15:01.131 "num_base_bdevs_discovered": 0, 00:15:01.131 "num_base_bdevs_operational": 4, 00:15:01.131 "base_bdevs_list": [ 00:15:01.131 { 00:15:01.131 "name": "BaseBdev1", 00:15:01.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.131 "is_configured": false, 00:15:01.131 "data_offset": 0, 00:15:01.131 "data_size": 0 00:15:01.131 }, 00:15:01.131 { 00:15:01.131 "name": "BaseBdev2", 00:15:01.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.131 "is_configured": false, 00:15:01.131 "data_offset": 0, 00:15:01.131 "data_size": 0 00:15:01.131 }, 00:15:01.131 { 00:15:01.131 "name": "BaseBdev3", 00:15:01.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.131 "is_configured": false, 00:15:01.131 "data_offset": 0, 00:15:01.131 "data_size": 0 00:15:01.131 }, 00:15:01.131 { 00:15:01.131 "name": "BaseBdev4", 00:15:01.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.131 "is_configured": false, 00:15:01.131 "data_offset": 0, 00:15:01.131 "data_size": 0 00:15:01.131 } 00:15:01.131 ] 00:15:01.131 }' 00:15:01.131 12:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.131 12:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.727 12:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:15:01.727 12:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.727 12:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.727 [2024-11-25 12:13:57.514500] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:01.728 [2024-11-25 12:13:57.514559] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:01.728 12:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.728 12:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:01.728 12:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.728 12:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.728 [2024-11-25 12:13:57.526474] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:01.728 [2024-11-25 12:13:57.526660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:01.728 [2024-11-25 12:13:57.526779] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:01.728 [2024-11-25 12:13:57.526839] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:01.728 [2024-11-25 12:13:57.527042] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:01.728 [2024-11-25 12:13:57.527119] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:01.728 [2024-11-25 12:13:57.527163] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:01.728 [2024-11-25 12:13:57.527382] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:01.728 12:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.728 12:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:01.728 12:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.728 12:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.728 [2024-11-25 12:13:57.571275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:01.728 BaseBdev1 00:15:01.728 12:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.728 12:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:01.728 12:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:01.728 12:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:01.728 12:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:01.728 12:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:01.728 12:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:01.728 12:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:01.728 12:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.728 12:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.728 12:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.728 12:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:01.728 12:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.728 12:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.728 [ 00:15:01.728 { 00:15:01.728 "name": "BaseBdev1", 00:15:01.728 "aliases": [ 00:15:01.728 "f6993fcc-b9d2-4dbb-937c-6b0d117d3648" 00:15:01.728 ], 00:15:01.728 "product_name": "Malloc disk", 00:15:01.728 "block_size": 512, 00:15:01.728 "num_blocks": 65536, 00:15:01.728 "uuid": "f6993fcc-b9d2-4dbb-937c-6b0d117d3648", 00:15:01.728 "assigned_rate_limits": { 00:15:01.728 "rw_ios_per_sec": 0, 00:15:01.728 "rw_mbytes_per_sec": 0, 00:15:01.728 "r_mbytes_per_sec": 0, 00:15:01.728 "w_mbytes_per_sec": 0 00:15:01.728 }, 00:15:01.728 "claimed": true, 00:15:01.728 "claim_type": "exclusive_write", 00:15:01.728 "zoned": false, 00:15:01.728 "supported_io_types": { 00:15:01.728 "read": true, 00:15:01.728 "write": true, 00:15:01.728 "unmap": true, 00:15:01.728 "flush": true, 00:15:01.728 "reset": true, 00:15:01.728 "nvme_admin": false, 00:15:01.728 "nvme_io": false, 00:15:01.728 "nvme_io_md": false, 00:15:01.728 "write_zeroes": true, 00:15:01.728 "zcopy": true, 00:15:01.728 "get_zone_info": false, 00:15:01.728 "zone_management": false, 00:15:01.728 "zone_append": false, 00:15:01.728 "compare": false, 00:15:01.728 "compare_and_write": false, 00:15:01.728 "abort": true, 00:15:01.728 "seek_hole": false, 00:15:01.728 "seek_data": false, 00:15:01.728 "copy": true, 00:15:01.728 "nvme_iov_md": false 00:15:01.728 }, 00:15:01.728 "memory_domains": [ 00:15:01.728 { 00:15:01.728 "dma_device_id": "system", 00:15:01.728 "dma_device_type": 1 00:15:01.728 }, 00:15:01.728 { 00:15:01.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.728 "dma_device_type": 2 00:15:01.728 } 00:15:01.728 ], 00:15:01.728 "driver_specific": {} 00:15:01.728 } 00:15:01.728 ] 00:15:01.728 12:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:01.728 12:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:01.728 12:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:01.728 12:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.728 12:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:01.728 12:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:01.728 12:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.728 12:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:01.728 12:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.728 12:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.728 12:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.728 12:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.728 12:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.728 12:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.728 12:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.728 12:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.728 12:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.728 12:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.728 "name": "Existed_Raid", 
00:15:01.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.728 "strip_size_kb": 64, 00:15:01.728 "state": "configuring", 00:15:01.728 "raid_level": "raid0", 00:15:01.728 "superblock": false, 00:15:01.728 "num_base_bdevs": 4, 00:15:01.728 "num_base_bdevs_discovered": 1, 00:15:01.728 "num_base_bdevs_operational": 4, 00:15:01.728 "base_bdevs_list": [ 00:15:01.728 { 00:15:01.728 "name": "BaseBdev1", 00:15:01.728 "uuid": "f6993fcc-b9d2-4dbb-937c-6b0d117d3648", 00:15:01.728 "is_configured": true, 00:15:01.728 "data_offset": 0, 00:15:01.728 "data_size": 65536 00:15:01.728 }, 00:15:01.728 { 00:15:01.728 "name": "BaseBdev2", 00:15:01.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.728 "is_configured": false, 00:15:01.728 "data_offset": 0, 00:15:01.728 "data_size": 0 00:15:01.728 }, 00:15:01.728 { 00:15:01.728 "name": "BaseBdev3", 00:15:01.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.728 "is_configured": false, 00:15:01.728 "data_offset": 0, 00:15:01.728 "data_size": 0 00:15:01.728 }, 00:15:01.728 { 00:15:01.728 "name": "BaseBdev4", 00:15:01.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.728 "is_configured": false, 00:15:01.728 "data_offset": 0, 00:15:01.728 "data_size": 0 00:15:01.728 } 00:15:01.728 ] 00:15:01.728 }' 00:15:01.728 12:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.728 12:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.296 12:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:02.296 12:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.296 12:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.296 [2024-11-25 12:13:58.119503] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:02.296 [2024-11-25 12:13:58.119568] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:02.296 12:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.296 12:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:02.296 12:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.296 12:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.296 [2024-11-25 12:13:58.127528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:02.296 [2024-11-25 12:13:58.130113] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:02.296 [2024-11-25 12:13:58.130167] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:02.296 [2024-11-25 12:13:58.130184] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:02.296 [2024-11-25 12:13:58.130201] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:02.296 [2024-11-25 12:13:58.130211] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:02.296 [2024-11-25 12:13:58.130226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:02.296 12:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.296 12:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:02.296 12:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:02.296 12:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:15:02.296 12:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:02.296 12:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:02.296 12:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:02.296 12:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.296 12:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:02.296 12:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.296 12:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.296 12:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.296 12:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.296 12:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.296 12:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.296 12:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.296 12:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.296 12:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.296 12:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.296 "name": "Existed_Raid", 00:15:02.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.296 "strip_size_kb": 64, 00:15:02.296 "state": "configuring", 00:15:02.296 "raid_level": "raid0", 00:15:02.296 "superblock": false, 00:15:02.296 "num_base_bdevs": 4, 00:15:02.296 
"num_base_bdevs_discovered": 1, 00:15:02.296 "num_base_bdevs_operational": 4, 00:15:02.296 "base_bdevs_list": [ 00:15:02.296 { 00:15:02.296 "name": "BaseBdev1", 00:15:02.296 "uuid": "f6993fcc-b9d2-4dbb-937c-6b0d117d3648", 00:15:02.296 "is_configured": true, 00:15:02.296 "data_offset": 0, 00:15:02.296 "data_size": 65536 00:15:02.296 }, 00:15:02.296 { 00:15:02.296 "name": "BaseBdev2", 00:15:02.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.296 "is_configured": false, 00:15:02.296 "data_offset": 0, 00:15:02.296 "data_size": 0 00:15:02.296 }, 00:15:02.296 { 00:15:02.296 "name": "BaseBdev3", 00:15:02.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.296 "is_configured": false, 00:15:02.296 "data_offset": 0, 00:15:02.296 "data_size": 0 00:15:02.296 }, 00:15:02.296 { 00:15:02.296 "name": "BaseBdev4", 00:15:02.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.296 "is_configured": false, 00:15:02.296 "data_offset": 0, 00:15:02.296 "data_size": 0 00:15:02.296 } 00:15:02.296 ] 00:15:02.296 }' 00:15:02.296 12:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.296 12:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.864 12:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:02.864 12:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.864 12:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.864 [2024-11-25 12:13:58.690755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:02.864 BaseBdev2 00:15:02.864 12:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.864 12:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:02.864 12:13:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:02.864 12:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:02.864 12:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:02.864 12:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:02.864 12:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:02.864 12:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:02.864 12:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.864 12:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.864 12:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.864 12:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:02.864 12:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.864 12:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.864 [ 00:15:02.864 { 00:15:02.864 "name": "BaseBdev2", 00:15:02.864 "aliases": [ 00:15:02.864 "06b6dc08-d8d3-4850-b4cb-87a4274de5dc" 00:15:02.864 ], 00:15:02.864 "product_name": "Malloc disk", 00:15:02.864 "block_size": 512, 00:15:02.864 "num_blocks": 65536, 00:15:02.864 "uuid": "06b6dc08-d8d3-4850-b4cb-87a4274de5dc", 00:15:02.864 "assigned_rate_limits": { 00:15:02.864 "rw_ios_per_sec": 0, 00:15:02.864 "rw_mbytes_per_sec": 0, 00:15:02.864 "r_mbytes_per_sec": 0, 00:15:02.864 "w_mbytes_per_sec": 0 00:15:02.864 }, 00:15:02.864 "claimed": true, 00:15:02.864 "claim_type": "exclusive_write", 00:15:02.864 "zoned": false, 00:15:02.864 "supported_io_types": { 
00:15:02.864 "read": true, 00:15:02.864 "write": true, 00:15:02.864 "unmap": true, 00:15:02.864 "flush": true, 00:15:02.864 "reset": true, 00:15:02.864 "nvme_admin": false, 00:15:02.864 "nvme_io": false, 00:15:02.864 "nvme_io_md": false, 00:15:02.864 "write_zeroes": true, 00:15:02.864 "zcopy": true, 00:15:02.864 "get_zone_info": false, 00:15:02.864 "zone_management": false, 00:15:02.864 "zone_append": false, 00:15:02.864 "compare": false, 00:15:02.864 "compare_and_write": false, 00:15:02.864 "abort": true, 00:15:02.864 "seek_hole": false, 00:15:02.864 "seek_data": false, 00:15:02.864 "copy": true, 00:15:02.864 "nvme_iov_md": false 00:15:02.864 }, 00:15:02.864 "memory_domains": [ 00:15:02.864 { 00:15:02.864 "dma_device_id": "system", 00:15:02.864 "dma_device_type": 1 00:15:02.864 }, 00:15:02.864 { 00:15:02.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.864 "dma_device_type": 2 00:15:02.864 } 00:15:02.864 ], 00:15:02.864 "driver_specific": {} 00:15:02.864 } 00:15:02.864 ] 00:15:02.864 12:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.864 12:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:02.864 12:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:02.864 12:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:02.864 12:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:02.864 12:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:02.864 12:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:02.864 12:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:02.864 12:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:15:02.864 12:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:02.864 12:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.864 12:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.864 12:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.864 12:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.864 12:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.864 12:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.864 12:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.864 12:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.864 12:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.864 12:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.864 "name": "Existed_Raid", 00:15:02.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.864 "strip_size_kb": 64, 00:15:02.864 "state": "configuring", 00:15:02.864 "raid_level": "raid0", 00:15:02.864 "superblock": false, 00:15:02.864 "num_base_bdevs": 4, 00:15:02.864 "num_base_bdevs_discovered": 2, 00:15:02.864 "num_base_bdevs_operational": 4, 00:15:02.864 "base_bdevs_list": [ 00:15:02.864 { 00:15:02.864 "name": "BaseBdev1", 00:15:02.864 "uuid": "f6993fcc-b9d2-4dbb-937c-6b0d117d3648", 00:15:02.864 "is_configured": true, 00:15:02.864 "data_offset": 0, 00:15:02.864 "data_size": 65536 00:15:02.864 }, 00:15:02.864 { 00:15:02.864 "name": "BaseBdev2", 00:15:02.864 "uuid": "06b6dc08-d8d3-4850-b4cb-87a4274de5dc", 00:15:02.864 
"is_configured": true, 00:15:02.864 "data_offset": 0, 00:15:02.864 "data_size": 65536 00:15:02.864 }, 00:15:02.864 { 00:15:02.864 "name": "BaseBdev3", 00:15:02.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.864 "is_configured": false, 00:15:02.864 "data_offset": 0, 00:15:02.864 "data_size": 0 00:15:02.864 }, 00:15:02.864 { 00:15:02.864 "name": "BaseBdev4", 00:15:02.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.864 "is_configured": false, 00:15:02.864 "data_offset": 0, 00:15:02.864 "data_size": 0 00:15:02.864 } 00:15:02.864 ] 00:15:02.864 }' 00:15:02.864 12:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.864 12:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.432 12:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:03.432 12:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.432 12:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.432 [2024-11-25 12:13:59.305011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:03.432 BaseBdev3 00:15:03.432 12:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.432 12:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:03.432 12:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:03.432 12:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:03.432 12:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:03.432 12:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:03.432 12:13:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:03.432 12:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:03.432 12:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.432 12:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.432 12:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.432 12:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:03.432 12:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.432 12:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.432 [ 00:15:03.432 { 00:15:03.432 "name": "BaseBdev3", 00:15:03.432 "aliases": [ 00:15:03.432 "4726ebea-4023-4010-81ff-f83e4638a623" 00:15:03.432 ], 00:15:03.432 "product_name": "Malloc disk", 00:15:03.432 "block_size": 512, 00:15:03.432 "num_blocks": 65536, 00:15:03.432 "uuid": "4726ebea-4023-4010-81ff-f83e4638a623", 00:15:03.432 "assigned_rate_limits": { 00:15:03.432 "rw_ios_per_sec": 0, 00:15:03.432 "rw_mbytes_per_sec": 0, 00:15:03.432 "r_mbytes_per_sec": 0, 00:15:03.432 "w_mbytes_per_sec": 0 00:15:03.432 }, 00:15:03.432 "claimed": true, 00:15:03.432 "claim_type": "exclusive_write", 00:15:03.432 "zoned": false, 00:15:03.432 "supported_io_types": { 00:15:03.432 "read": true, 00:15:03.432 "write": true, 00:15:03.432 "unmap": true, 00:15:03.432 "flush": true, 00:15:03.432 "reset": true, 00:15:03.432 "nvme_admin": false, 00:15:03.432 "nvme_io": false, 00:15:03.432 "nvme_io_md": false, 00:15:03.432 "write_zeroes": true, 00:15:03.432 "zcopy": true, 00:15:03.432 "get_zone_info": false, 00:15:03.432 "zone_management": false, 00:15:03.432 "zone_append": false, 00:15:03.432 "compare": false, 00:15:03.432 "compare_and_write": false, 
00:15:03.432 "abort": true, 00:15:03.432 "seek_hole": false, 00:15:03.432 "seek_data": false, 00:15:03.432 "copy": true, 00:15:03.432 "nvme_iov_md": false 00:15:03.432 }, 00:15:03.432 "memory_domains": [ 00:15:03.432 { 00:15:03.432 "dma_device_id": "system", 00:15:03.432 "dma_device_type": 1 00:15:03.432 }, 00:15:03.432 { 00:15:03.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.432 "dma_device_type": 2 00:15:03.432 } 00:15:03.432 ], 00:15:03.432 "driver_specific": {} 00:15:03.432 } 00:15:03.432 ] 00:15:03.432 12:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.432 12:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:03.432 12:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:03.432 12:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:03.432 12:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:03.432 12:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:03.432 12:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:03.432 12:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:03.432 12:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.432 12:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:03.432 12:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.432 12:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.432 12:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:03.432 12:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.432 12:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.432 12:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.432 12:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.432 12:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.432 12:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.432 12:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.432 "name": "Existed_Raid", 00:15:03.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.432 "strip_size_kb": 64, 00:15:03.432 "state": "configuring", 00:15:03.432 "raid_level": "raid0", 00:15:03.432 "superblock": false, 00:15:03.432 "num_base_bdevs": 4, 00:15:03.432 "num_base_bdevs_discovered": 3, 00:15:03.432 "num_base_bdevs_operational": 4, 00:15:03.432 "base_bdevs_list": [ 00:15:03.432 { 00:15:03.432 "name": "BaseBdev1", 00:15:03.432 "uuid": "f6993fcc-b9d2-4dbb-937c-6b0d117d3648", 00:15:03.432 "is_configured": true, 00:15:03.432 "data_offset": 0, 00:15:03.432 "data_size": 65536 00:15:03.432 }, 00:15:03.432 { 00:15:03.432 "name": "BaseBdev2", 00:15:03.432 "uuid": "06b6dc08-d8d3-4850-b4cb-87a4274de5dc", 00:15:03.432 "is_configured": true, 00:15:03.432 "data_offset": 0, 00:15:03.432 "data_size": 65536 00:15:03.432 }, 00:15:03.432 { 00:15:03.432 "name": "BaseBdev3", 00:15:03.432 "uuid": "4726ebea-4023-4010-81ff-f83e4638a623", 00:15:03.432 "is_configured": true, 00:15:03.432 "data_offset": 0, 00:15:03.432 "data_size": 65536 00:15:03.432 }, 00:15:03.432 { 00:15:03.432 "name": "BaseBdev4", 00:15:03.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.432 "is_configured": false, 
00:15:03.432 "data_offset": 0, 00:15:03.432 "data_size": 0 00:15:03.432 } 00:15:03.432 ] 00:15:03.432 }' 00:15:03.432 12:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.433 12:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.000 12:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:04.000 12:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.000 12:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.000 [2024-11-25 12:13:59.887417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:04.000 [2024-11-25 12:13:59.887476] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:04.000 [2024-11-25 12:13:59.887496] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:15:04.000 [2024-11-25 12:13:59.887836] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:04.000 [2024-11-25 12:13:59.888063] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:04.000 [2024-11-25 12:13:59.888088] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:04.000 [2024-11-25 12:13:59.888438] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:04.000 BaseBdev4 00:15:04.000 12:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.000 12:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:04.000 12:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:04.000 12:13:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:04.000 12:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:04.000 12:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:04.000 12:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:04.000 12:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:04.000 12:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.000 12:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.000 12:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.000 12:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:04.000 12:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.000 12:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.000 [ 00:15:04.000 { 00:15:04.000 "name": "BaseBdev4", 00:15:04.000 "aliases": [ 00:15:04.000 "0e472256-dbdf-42cd-81b4-192ff6462e2e" 00:15:04.000 ], 00:15:04.000 "product_name": "Malloc disk", 00:15:04.000 "block_size": 512, 00:15:04.000 "num_blocks": 65536, 00:15:04.000 "uuid": "0e472256-dbdf-42cd-81b4-192ff6462e2e", 00:15:04.000 "assigned_rate_limits": { 00:15:04.000 "rw_ios_per_sec": 0, 00:15:04.000 "rw_mbytes_per_sec": 0, 00:15:04.000 "r_mbytes_per_sec": 0, 00:15:04.000 "w_mbytes_per_sec": 0 00:15:04.000 }, 00:15:04.000 "claimed": true, 00:15:04.000 "claim_type": "exclusive_write", 00:15:04.000 "zoned": false, 00:15:04.000 "supported_io_types": { 00:15:04.000 "read": true, 00:15:04.000 "write": true, 00:15:04.000 "unmap": true, 00:15:04.000 "flush": true, 00:15:04.000 "reset": true, 00:15:04.000 
"nvme_admin": false, 00:15:04.000 "nvme_io": false, 00:15:04.000 "nvme_io_md": false, 00:15:04.000 "write_zeroes": true, 00:15:04.000 "zcopy": true, 00:15:04.000 "get_zone_info": false, 00:15:04.000 "zone_management": false, 00:15:04.000 "zone_append": false, 00:15:04.000 "compare": false, 00:15:04.000 "compare_and_write": false, 00:15:04.000 "abort": true, 00:15:04.000 "seek_hole": false, 00:15:04.000 "seek_data": false, 00:15:04.000 "copy": true, 00:15:04.000 "nvme_iov_md": false 00:15:04.000 }, 00:15:04.000 "memory_domains": [ 00:15:04.000 { 00:15:04.000 "dma_device_id": "system", 00:15:04.000 "dma_device_type": 1 00:15:04.000 }, 00:15:04.000 { 00:15:04.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.000 "dma_device_type": 2 00:15:04.000 } 00:15:04.000 ], 00:15:04.000 "driver_specific": {} 00:15:04.000 } 00:15:04.000 ] 00:15:04.000 12:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.000 12:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:04.000 12:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:04.000 12:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:04.000 12:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:15:04.000 12:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:04.000 12:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:04.000 12:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:04.000 12:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.001 12:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:04.001 12:13:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.001 12:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.001 12:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.001 12:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.001 12:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.001 12:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.001 12:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.001 12:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.001 12:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.001 12:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.001 "name": "Existed_Raid", 00:15:04.001 "uuid": "533345c6-a23a-49b4-943a-4678a28d1722", 00:15:04.001 "strip_size_kb": 64, 00:15:04.001 "state": "online", 00:15:04.001 "raid_level": "raid0", 00:15:04.001 "superblock": false, 00:15:04.001 "num_base_bdevs": 4, 00:15:04.001 "num_base_bdevs_discovered": 4, 00:15:04.001 "num_base_bdevs_operational": 4, 00:15:04.001 "base_bdevs_list": [ 00:15:04.001 { 00:15:04.001 "name": "BaseBdev1", 00:15:04.001 "uuid": "f6993fcc-b9d2-4dbb-937c-6b0d117d3648", 00:15:04.001 "is_configured": true, 00:15:04.001 "data_offset": 0, 00:15:04.001 "data_size": 65536 00:15:04.001 }, 00:15:04.001 { 00:15:04.001 "name": "BaseBdev2", 00:15:04.001 "uuid": "06b6dc08-d8d3-4850-b4cb-87a4274de5dc", 00:15:04.001 "is_configured": true, 00:15:04.001 "data_offset": 0, 00:15:04.001 "data_size": 65536 00:15:04.001 }, 00:15:04.001 { 00:15:04.001 "name": "BaseBdev3", 00:15:04.001 "uuid": 
"4726ebea-4023-4010-81ff-f83e4638a623", 00:15:04.001 "is_configured": true, 00:15:04.001 "data_offset": 0, 00:15:04.001 "data_size": 65536 00:15:04.001 }, 00:15:04.001 { 00:15:04.001 "name": "BaseBdev4", 00:15:04.001 "uuid": "0e472256-dbdf-42cd-81b4-192ff6462e2e", 00:15:04.001 "is_configured": true, 00:15:04.001 "data_offset": 0, 00:15:04.001 "data_size": 65536 00:15:04.001 } 00:15:04.001 ] 00:15:04.001 }' 00:15:04.001 12:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.001 12:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.568 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:04.568 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:04.568 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:04.568 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:04.568 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:04.568 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:04.568 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:04.568 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:04.568 12:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.568 12:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.568 [2024-11-25 12:14:00.408048] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:04.568 12:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.568 12:14:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:04.568 "name": "Existed_Raid", 00:15:04.568 "aliases": [ 00:15:04.568 "533345c6-a23a-49b4-943a-4678a28d1722" 00:15:04.568 ], 00:15:04.568 "product_name": "Raid Volume", 00:15:04.568 "block_size": 512, 00:15:04.568 "num_blocks": 262144, 00:15:04.568 "uuid": "533345c6-a23a-49b4-943a-4678a28d1722", 00:15:04.568 "assigned_rate_limits": { 00:15:04.568 "rw_ios_per_sec": 0, 00:15:04.568 "rw_mbytes_per_sec": 0, 00:15:04.568 "r_mbytes_per_sec": 0, 00:15:04.568 "w_mbytes_per_sec": 0 00:15:04.568 }, 00:15:04.568 "claimed": false, 00:15:04.568 "zoned": false, 00:15:04.568 "supported_io_types": { 00:15:04.568 "read": true, 00:15:04.568 "write": true, 00:15:04.568 "unmap": true, 00:15:04.568 "flush": true, 00:15:04.568 "reset": true, 00:15:04.568 "nvme_admin": false, 00:15:04.568 "nvme_io": false, 00:15:04.568 "nvme_io_md": false, 00:15:04.568 "write_zeroes": true, 00:15:04.568 "zcopy": false, 00:15:04.568 "get_zone_info": false, 00:15:04.568 "zone_management": false, 00:15:04.568 "zone_append": false, 00:15:04.568 "compare": false, 00:15:04.568 "compare_and_write": false, 00:15:04.568 "abort": false, 00:15:04.568 "seek_hole": false, 00:15:04.568 "seek_data": false, 00:15:04.568 "copy": false, 00:15:04.568 "nvme_iov_md": false 00:15:04.568 }, 00:15:04.568 "memory_domains": [ 00:15:04.568 { 00:15:04.568 "dma_device_id": "system", 00:15:04.568 "dma_device_type": 1 00:15:04.568 }, 00:15:04.568 { 00:15:04.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.568 "dma_device_type": 2 00:15:04.568 }, 00:15:04.568 { 00:15:04.568 "dma_device_id": "system", 00:15:04.568 "dma_device_type": 1 00:15:04.568 }, 00:15:04.568 { 00:15:04.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.568 "dma_device_type": 2 00:15:04.568 }, 00:15:04.568 { 00:15:04.568 "dma_device_id": "system", 00:15:04.568 "dma_device_type": 1 00:15:04.568 }, 00:15:04.568 { 00:15:04.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:15:04.568 "dma_device_type": 2 00:15:04.568 }, 00:15:04.568 { 00:15:04.568 "dma_device_id": "system", 00:15:04.568 "dma_device_type": 1 00:15:04.568 }, 00:15:04.568 { 00:15:04.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.568 "dma_device_type": 2 00:15:04.568 } 00:15:04.568 ], 00:15:04.568 "driver_specific": { 00:15:04.568 "raid": { 00:15:04.568 "uuid": "533345c6-a23a-49b4-943a-4678a28d1722", 00:15:04.568 "strip_size_kb": 64, 00:15:04.568 "state": "online", 00:15:04.568 "raid_level": "raid0", 00:15:04.568 "superblock": false, 00:15:04.568 "num_base_bdevs": 4, 00:15:04.568 "num_base_bdevs_discovered": 4, 00:15:04.568 "num_base_bdevs_operational": 4, 00:15:04.568 "base_bdevs_list": [ 00:15:04.568 { 00:15:04.568 "name": "BaseBdev1", 00:15:04.568 "uuid": "f6993fcc-b9d2-4dbb-937c-6b0d117d3648", 00:15:04.568 "is_configured": true, 00:15:04.568 "data_offset": 0, 00:15:04.568 "data_size": 65536 00:15:04.568 }, 00:15:04.568 { 00:15:04.568 "name": "BaseBdev2", 00:15:04.568 "uuid": "06b6dc08-d8d3-4850-b4cb-87a4274de5dc", 00:15:04.568 "is_configured": true, 00:15:04.568 "data_offset": 0, 00:15:04.568 "data_size": 65536 00:15:04.568 }, 00:15:04.568 { 00:15:04.568 "name": "BaseBdev3", 00:15:04.568 "uuid": "4726ebea-4023-4010-81ff-f83e4638a623", 00:15:04.568 "is_configured": true, 00:15:04.568 "data_offset": 0, 00:15:04.568 "data_size": 65536 00:15:04.568 }, 00:15:04.568 { 00:15:04.568 "name": "BaseBdev4", 00:15:04.568 "uuid": "0e472256-dbdf-42cd-81b4-192ff6462e2e", 00:15:04.568 "is_configured": true, 00:15:04.568 "data_offset": 0, 00:15:04.568 "data_size": 65536 00:15:04.568 } 00:15:04.568 ] 00:15:04.568 } 00:15:04.568 } 00:15:04.568 }' 00:15:04.568 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:04.568 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:04.568 BaseBdev2 00:15:04.568 BaseBdev3 
00:15:04.568 BaseBdev4' 00:15:04.568 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.568 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:04.568 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:04.568 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:04.568 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.568 12:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.568 12:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.568 12:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.568 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:04.568 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:04.568 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:04.568 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:04.568 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.568 12:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.568 12:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.568 12:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.569 12:14:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:04.569 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:04.569 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:04.827 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:04.828 12:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.828 12:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.828 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.828 12:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.828 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:04.828 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:04.828 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:04.828 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:04.828 12:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.828 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.828 12:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.828 12:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.828 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:04.828 12:14:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:04.828 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:04.828 12:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.828 12:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.828 [2024-11-25 12:14:00.771794] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:04.828 [2024-11-25 12:14:00.771834] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:04.828 [2024-11-25 12:14:00.771907] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:04.828 12:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.828 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:04.828 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:15:04.828 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:04.828 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:04.828 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:15:04.828 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:15:04.828 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:04.828 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:15:04.828 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:04.828 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:15:04.828 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:04.828 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.828 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.828 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.828 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.828 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.828 12:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.828 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.828 12:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.828 12:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.087 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.087 "name": "Existed_Raid", 00:15:05.087 "uuid": "533345c6-a23a-49b4-943a-4678a28d1722", 00:15:05.087 "strip_size_kb": 64, 00:15:05.087 "state": "offline", 00:15:05.087 "raid_level": "raid0", 00:15:05.087 "superblock": false, 00:15:05.087 "num_base_bdevs": 4, 00:15:05.087 "num_base_bdevs_discovered": 3, 00:15:05.087 "num_base_bdevs_operational": 3, 00:15:05.087 "base_bdevs_list": [ 00:15:05.087 { 00:15:05.087 "name": null, 00:15:05.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.087 "is_configured": false, 00:15:05.087 "data_offset": 0, 00:15:05.087 "data_size": 65536 00:15:05.087 }, 00:15:05.087 { 00:15:05.087 "name": "BaseBdev2", 00:15:05.087 "uuid": "06b6dc08-d8d3-4850-b4cb-87a4274de5dc", 00:15:05.087 "is_configured": 
true, 00:15:05.087 "data_offset": 0, 00:15:05.087 "data_size": 65536 00:15:05.087 }, 00:15:05.087 { 00:15:05.087 "name": "BaseBdev3", 00:15:05.087 "uuid": "4726ebea-4023-4010-81ff-f83e4638a623", 00:15:05.087 "is_configured": true, 00:15:05.087 "data_offset": 0, 00:15:05.087 "data_size": 65536 00:15:05.087 }, 00:15:05.087 { 00:15:05.087 "name": "BaseBdev4", 00:15:05.087 "uuid": "0e472256-dbdf-42cd-81b4-192ff6462e2e", 00:15:05.087 "is_configured": true, 00:15:05.087 "data_offset": 0, 00:15:05.087 "data_size": 65536 00:15:05.087 } 00:15:05.087 ] 00:15:05.087 }' 00:15:05.087 12:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.087 12:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.655 12:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:05.655 12:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:05.655 12:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.655 12:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:05.655 12:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.655 12:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.655 12:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.655 12:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:05.655 12:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:05.655 12:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:05.655 12:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:05.655 12:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.655 [2024-11-25 12:14:01.505310] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:05.655 12:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.655 12:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:05.655 12:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:05.655 12:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.655 12:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:05.655 12:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.655 12:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.655 12:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.655 12:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:05.655 12:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:05.655 12:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:05.655 12:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.655 12:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.655 [2024-11-25 12:14:01.649359] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:05.655 12:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.655 12:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:05.655 12:14:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:05.655 12:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.655 12:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:05.655 12:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.656 12:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.914 12:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.914 12:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:05.914 12:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:05.914 12:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:05.914 12:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.914 12:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.914 [2024-11-25 12:14:01.790540] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:05.914 [2024-11-25 12:14:01.790602] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:05.914 12:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.914 12:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:05.914 12:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:05.914 12:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.914 12:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:05.914 12:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.914 12:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:05.914 12:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.914 12:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:05.914 12:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:05.914 12:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:05.914 12:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:05.914 12:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:05.915 12:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:05.915 12:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.915 12:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.915 BaseBdev2 00:15:05.915 12:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.915 12:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:05.915 12:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:05.915 12:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:05.915 12:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:05.915 12:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:05.915 12:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:15:05.915 12:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:05.915 12:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.915 12:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.915 12:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.915 12:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:05.915 12:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.915 12:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.915 [ 00:15:05.915 { 00:15:05.915 "name": "BaseBdev2", 00:15:05.915 "aliases": [ 00:15:05.915 "415d8ab5-0cec-4447-a1c4-9d16fef0ae95" 00:15:05.915 ], 00:15:05.915 "product_name": "Malloc disk", 00:15:05.915 "block_size": 512, 00:15:05.915 "num_blocks": 65536, 00:15:05.915 "uuid": "415d8ab5-0cec-4447-a1c4-9d16fef0ae95", 00:15:05.915 "assigned_rate_limits": { 00:15:05.915 "rw_ios_per_sec": 0, 00:15:05.915 "rw_mbytes_per_sec": 0, 00:15:05.915 "r_mbytes_per_sec": 0, 00:15:05.915 "w_mbytes_per_sec": 0 00:15:05.915 }, 00:15:05.915 "claimed": false, 00:15:05.915 "zoned": false, 00:15:05.915 "supported_io_types": { 00:15:05.915 "read": true, 00:15:05.915 "write": true, 00:15:05.915 "unmap": true, 00:15:05.915 "flush": true, 00:15:05.915 "reset": true, 00:15:05.915 "nvme_admin": false, 00:15:05.915 "nvme_io": false, 00:15:05.915 "nvme_io_md": false, 00:15:05.915 "write_zeroes": true, 00:15:05.915 "zcopy": true, 00:15:05.915 "get_zone_info": false, 00:15:05.915 "zone_management": false, 00:15:05.915 "zone_append": false, 00:15:05.915 "compare": false, 00:15:05.915 "compare_and_write": false, 00:15:05.915 "abort": true, 00:15:05.915 "seek_hole": false, 00:15:05.915 
"seek_data": false, 00:15:05.915 "copy": true, 00:15:05.915 "nvme_iov_md": false 00:15:05.915 }, 00:15:05.915 "memory_domains": [ 00:15:05.915 { 00:15:05.915 "dma_device_id": "system", 00:15:05.915 "dma_device_type": 1 00:15:05.915 }, 00:15:05.915 { 00:15:05.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.915 "dma_device_type": 2 00:15:05.915 } 00:15:05.915 ], 00:15:05.916 "driver_specific": {} 00:15:05.916 } 00:15:05.916 ] 00:15:05.916 12:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.916 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:05.916 12:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:05.916 12:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:05.916 12:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:05.916 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.916 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.175 BaseBdev3 00:15:06.175 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.175 12:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:06.175 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:06.175 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:06.175 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:06.175 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:06.175 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:15:06.175 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:06.175 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.175 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.175 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.175 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:06.175 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.175 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.175 [ 00:15:06.175 { 00:15:06.175 "name": "BaseBdev3", 00:15:06.175 "aliases": [ 00:15:06.175 "dc996e21-5211-4ccc-9456-a26b2d48d4eb" 00:15:06.175 ], 00:15:06.175 "product_name": "Malloc disk", 00:15:06.175 "block_size": 512, 00:15:06.175 "num_blocks": 65536, 00:15:06.175 "uuid": "dc996e21-5211-4ccc-9456-a26b2d48d4eb", 00:15:06.175 "assigned_rate_limits": { 00:15:06.175 "rw_ios_per_sec": 0, 00:15:06.175 "rw_mbytes_per_sec": 0, 00:15:06.175 "r_mbytes_per_sec": 0, 00:15:06.175 "w_mbytes_per_sec": 0 00:15:06.175 }, 00:15:06.175 "claimed": false, 00:15:06.175 "zoned": false, 00:15:06.175 "supported_io_types": { 00:15:06.175 "read": true, 00:15:06.175 "write": true, 00:15:06.175 "unmap": true, 00:15:06.175 "flush": true, 00:15:06.175 "reset": true, 00:15:06.175 "nvme_admin": false, 00:15:06.175 "nvme_io": false, 00:15:06.175 "nvme_io_md": false, 00:15:06.175 "write_zeroes": true, 00:15:06.175 "zcopy": true, 00:15:06.175 "get_zone_info": false, 00:15:06.175 "zone_management": false, 00:15:06.175 "zone_append": false, 00:15:06.175 "compare": false, 00:15:06.175 "compare_and_write": false, 00:15:06.175 "abort": true, 00:15:06.175 "seek_hole": false, 00:15:06.175 "seek_data": false, 
00:15:06.175 "copy": true, 00:15:06.176 "nvme_iov_md": false 00:15:06.176 }, 00:15:06.176 "memory_domains": [ 00:15:06.176 { 00:15:06.176 "dma_device_id": "system", 00:15:06.176 "dma_device_type": 1 00:15:06.176 }, 00:15:06.176 { 00:15:06.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.176 "dma_device_type": 2 00:15:06.176 } 00:15:06.176 ], 00:15:06.176 "driver_specific": {} 00:15:06.176 } 00:15:06.176 ] 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.176 BaseBdev4 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:06.176 
12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.176 [ 00:15:06.176 { 00:15:06.176 "name": "BaseBdev4", 00:15:06.176 "aliases": [ 00:15:06.176 "f1b49d2d-a424-4433-b424-02465274a8ec" 00:15:06.176 ], 00:15:06.176 "product_name": "Malloc disk", 00:15:06.176 "block_size": 512, 00:15:06.176 "num_blocks": 65536, 00:15:06.176 "uuid": "f1b49d2d-a424-4433-b424-02465274a8ec", 00:15:06.176 "assigned_rate_limits": { 00:15:06.176 "rw_ios_per_sec": 0, 00:15:06.176 "rw_mbytes_per_sec": 0, 00:15:06.176 "r_mbytes_per_sec": 0, 00:15:06.176 "w_mbytes_per_sec": 0 00:15:06.176 }, 00:15:06.176 "claimed": false, 00:15:06.176 "zoned": false, 00:15:06.176 "supported_io_types": { 00:15:06.176 "read": true, 00:15:06.176 "write": true, 00:15:06.176 "unmap": true, 00:15:06.176 "flush": true, 00:15:06.176 "reset": true, 00:15:06.176 "nvme_admin": false, 00:15:06.176 "nvme_io": false, 00:15:06.176 "nvme_io_md": false, 00:15:06.176 "write_zeroes": true, 00:15:06.176 "zcopy": true, 00:15:06.176 "get_zone_info": false, 00:15:06.176 "zone_management": false, 00:15:06.176 "zone_append": false, 00:15:06.176 "compare": false, 00:15:06.176 "compare_and_write": false, 00:15:06.176 "abort": true, 00:15:06.176 "seek_hole": false, 00:15:06.176 "seek_data": false, 00:15:06.176 
"copy": true, 00:15:06.176 "nvme_iov_md": false 00:15:06.176 }, 00:15:06.176 "memory_domains": [ 00:15:06.176 { 00:15:06.176 "dma_device_id": "system", 00:15:06.176 "dma_device_type": 1 00:15:06.176 }, 00:15:06.176 { 00:15:06.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.176 "dma_device_type": 2 00:15:06.176 } 00:15:06.176 ], 00:15:06.176 "driver_specific": {} 00:15:06.176 } 00:15:06.176 ] 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.176 [2024-11-25 12:14:02.160301] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:06.176 [2024-11-25 12:14:02.160368] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:06.176 [2024-11-25 12:14:02.160407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:06.176 [2024-11-25 12:14:02.162838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:06.176 [2024-11-25 12:14:02.162911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.176 12:14:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.176 "name": "Existed_Raid", 00:15:06.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.176 "strip_size_kb": 64, 00:15:06.176 "state": "configuring", 00:15:06.176 
"raid_level": "raid0", 00:15:06.176 "superblock": false, 00:15:06.176 "num_base_bdevs": 4, 00:15:06.176 "num_base_bdevs_discovered": 3, 00:15:06.176 "num_base_bdevs_operational": 4, 00:15:06.176 "base_bdevs_list": [ 00:15:06.176 { 00:15:06.176 "name": "BaseBdev1", 00:15:06.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.176 "is_configured": false, 00:15:06.176 "data_offset": 0, 00:15:06.176 "data_size": 0 00:15:06.176 }, 00:15:06.176 { 00:15:06.176 "name": "BaseBdev2", 00:15:06.176 "uuid": "415d8ab5-0cec-4447-a1c4-9d16fef0ae95", 00:15:06.176 "is_configured": true, 00:15:06.176 "data_offset": 0, 00:15:06.176 "data_size": 65536 00:15:06.176 }, 00:15:06.176 { 00:15:06.176 "name": "BaseBdev3", 00:15:06.176 "uuid": "dc996e21-5211-4ccc-9456-a26b2d48d4eb", 00:15:06.176 "is_configured": true, 00:15:06.176 "data_offset": 0, 00:15:06.176 "data_size": 65536 00:15:06.176 }, 00:15:06.176 { 00:15:06.176 "name": "BaseBdev4", 00:15:06.176 "uuid": "f1b49d2d-a424-4433-b424-02465274a8ec", 00:15:06.176 "is_configured": true, 00:15:06.176 "data_offset": 0, 00:15:06.176 "data_size": 65536 00:15:06.176 } 00:15:06.176 ] 00:15:06.176 }' 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.176 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.744 12:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:06.744 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.744 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.744 [2024-11-25 12:14:02.668480] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:06.744 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.744 12:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:06.744 12:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:06.744 12:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:06.744 12:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:06.744 12:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.744 12:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:06.744 12:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.744 12:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.744 12:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.744 12:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.744 12:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.744 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.744 12:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.744 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.744 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.744 12:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.744 "name": "Existed_Raid", 00:15:06.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.744 "strip_size_kb": 64, 00:15:06.744 "state": "configuring", 00:15:06.744 "raid_level": "raid0", 00:15:06.744 "superblock": false, 00:15:06.744 
"num_base_bdevs": 4, 00:15:06.744 "num_base_bdevs_discovered": 2, 00:15:06.744 "num_base_bdevs_operational": 4, 00:15:06.744 "base_bdevs_list": [ 00:15:06.744 { 00:15:06.744 "name": "BaseBdev1", 00:15:06.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.744 "is_configured": false, 00:15:06.744 "data_offset": 0, 00:15:06.744 "data_size": 0 00:15:06.744 }, 00:15:06.744 { 00:15:06.744 "name": null, 00:15:06.744 "uuid": "415d8ab5-0cec-4447-a1c4-9d16fef0ae95", 00:15:06.744 "is_configured": false, 00:15:06.744 "data_offset": 0, 00:15:06.744 "data_size": 65536 00:15:06.744 }, 00:15:06.744 { 00:15:06.744 "name": "BaseBdev3", 00:15:06.744 "uuid": "dc996e21-5211-4ccc-9456-a26b2d48d4eb", 00:15:06.744 "is_configured": true, 00:15:06.744 "data_offset": 0, 00:15:06.744 "data_size": 65536 00:15:06.744 }, 00:15:06.744 { 00:15:06.744 "name": "BaseBdev4", 00:15:06.744 "uuid": "f1b49d2d-a424-4433-b424-02465274a8ec", 00:15:06.744 "is_configured": true, 00:15:06.744 "data_offset": 0, 00:15:06.744 "data_size": 65536 00:15:06.744 } 00:15:06.744 ] 00:15:06.745 }' 00:15:06.745 12:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.745 12:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.312 12:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:07.312 12:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.312 12:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.312 12:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.312 12:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.312 12:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:07.312 12:14:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:07.312 12:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.312 12:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.312 [2024-11-25 12:14:03.298079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:07.312 BaseBdev1 00:15:07.312 12:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.312 12:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:07.312 12:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:07.312 12:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:07.312 12:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:07.312 12:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:07.312 12:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:07.312 12:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:07.312 12:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.312 12:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.312 12:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.312 12:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:07.312 12:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.312 12:14:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:07.312 [ 00:15:07.312 { 00:15:07.312 "name": "BaseBdev1", 00:15:07.312 "aliases": [ 00:15:07.312 "477d1616-868f-4d73-b83f-8496ba5f5618" 00:15:07.312 ], 00:15:07.312 "product_name": "Malloc disk", 00:15:07.312 "block_size": 512, 00:15:07.312 "num_blocks": 65536, 00:15:07.312 "uuid": "477d1616-868f-4d73-b83f-8496ba5f5618", 00:15:07.312 "assigned_rate_limits": { 00:15:07.312 "rw_ios_per_sec": 0, 00:15:07.312 "rw_mbytes_per_sec": 0, 00:15:07.312 "r_mbytes_per_sec": 0, 00:15:07.312 "w_mbytes_per_sec": 0 00:15:07.312 }, 00:15:07.312 "claimed": true, 00:15:07.312 "claim_type": "exclusive_write", 00:15:07.312 "zoned": false, 00:15:07.312 "supported_io_types": { 00:15:07.312 "read": true, 00:15:07.312 "write": true, 00:15:07.312 "unmap": true, 00:15:07.312 "flush": true, 00:15:07.312 "reset": true, 00:15:07.312 "nvme_admin": false, 00:15:07.312 "nvme_io": false, 00:15:07.312 "nvme_io_md": false, 00:15:07.312 "write_zeroes": true, 00:15:07.312 "zcopy": true, 00:15:07.312 "get_zone_info": false, 00:15:07.312 "zone_management": false, 00:15:07.312 "zone_append": false, 00:15:07.312 "compare": false, 00:15:07.312 "compare_and_write": false, 00:15:07.312 "abort": true, 00:15:07.312 "seek_hole": false, 00:15:07.312 "seek_data": false, 00:15:07.312 "copy": true, 00:15:07.312 "nvme_iov_md": false 00:15:07.312 }, 00:15:07.312 "memory_domains": [ 00:15:07.312 { 00:15:07.313 "dma_device_id": "system", 00:15:07.313 "dma_device_type": 1 00:15:07.313 }, 00:15:07.313 { 00:15:07.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.313 "dma_device_type": 2 00:15:07.313 } 00:15:07.313 ], 00:15:07.313 "driver_specific": {} 00:15:07.313 } 00:15:07.313 ] 00:15:07.313 12:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.313 12:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:07.313 12:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:07.313 12:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:07.313 12:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:07.313 12:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:07.313 12:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:07.313 12:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:07.313 12:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.313 12:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.313 12:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.313 12:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.313 12:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.313 12:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.313 12:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.313 12:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.313 12:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.313 12:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.313 "name": "Existed_Raid", 00:15:07.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.313 "strip_size_kb": 64, 00:15:07.313 "state": "configuring", 00:15:07.313 "raid_level": "raid0", 00:15:07.313 "superblock": false, 
00:15:07.313 "num_base_bdevs": 4, 00:15:07.313 "num_base_bdevs_discovered": 3, 00:15:07.313 "num_base_bdevs_operational": 4, 00:15:07.313 "base_bdevs_list": [ 00:15:07.313 { 00:15:07.313 "name": "BaseBdev1", 00:15:07.313 "uuid": "477d1616-868f-4d73-b83f-8496ba5f5618", 00:15:07.313 "is_configured": true, 00:15:07.313 "data_offset": 0, 00:15:07.313 "data_size": 65536 00:15:07.313 }, 00:15:07.313 { 00:15:07.313 "name": null, 00:15:07.313 "uuid": "415d8ab5-0cec-4447-a1c4-9d16fef0ae95", 00:15:07.313 "is_configured": false, 00:15:07.313 "data_offset": 0, 00:15:07.313 "data_size": 65536 00:15:07.313 }, 00:15:07.313 { 00:15:07.313 "name": "BaseBdev3", 00:15:07.313 "uuid": "dc996e21-5211-4ccc-9456-a26b2d48d4eb", 00:15:07.313 "is_configured": true, 00:15:07.313 "data_offset": 0, 00:15:07.313 "data_size": 65536 00:15:07.313 }, 00:15:07.313 { 00:15:07.313 "name": "BaseBdev4", 00:15:07.313 "uuid": "f1b49d2d-a424-4433-b424-02465274a8ec", 00:15:07.313 "is_configured": true, 00:15:07.313 "data_offset": 0, 00:15:07.313 "data_size": 65536 00:15:07.313 } 00:15:07.313 ] 00:15:07.313 }' 00:15:07.313 12:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.313 12:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.880 12:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.880 12:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:07.880 12:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.880 12:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.880 12:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.880 12:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:07.880 12:14:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:07.880 12:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.880 12:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.880 [2024-11-25 12:14:03.898375] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:07.880 12:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.880 12:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:07.880 12:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:07.880 12:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:07.880 12:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:07.880 12:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:07.880 12:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:07.880 12:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.880 12:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.880 12:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.880 12:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.881 12:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.881 12:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.881 12:14:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.881 12:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.881 12:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.881 12:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.881 "name": "Existed_Raid", 00:15:07.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.881 "strip_size_kb": 64, 00:15:07.881 "state": "configuring", 00:15:07.881 "raid_level": "raid0", 00:15:07.881 "superblock": false, 00:15:07.881 "num_base_bdevs": 4, 00:15:07.881 "num_base_bdevs_discovered": 2, 00:15:07.881 "num_base_bdevs_operational": 4, 00:15:07.881 "base_bdevs_list": [ 00:15:07.881 { 00:15:07.881 "name": "BaseBdev1", 00:15:07.881 "uuid": "477d1616-868f-4d73-b83f-8496ba5f5618", 00:15:07.881 "is_configured": true, 00:15:07.881 "data_offset": 0, 00:15:07.881 "data_size": 65536 00:15:07.881 }, 00:15:07.881 { 00:15:07.881 "name": null, 00:15:07.881 "uuid": "415d8ab5-0cec-4447-a1c4-9d16fef0ae95", 00:15:07.881 "is_configured": false, 00:15:07.881 "data_offset": 0, 00:15:07.881 "data_size": 65536 00:15:07.881 }, 00:15:07.881 { 00:15:07.881 "name": null, 00:15:07.881 "uuid": "dc996e21-5211-4ccc-9456-a26b2d48d4eb", 00:15:07.881 "is_configured": false, 00:15:07.881 "data_offset": 0, 00:15:07.881 "data_size": 65536 00:15:07.881 }, 00:15:07.881 { 00:15:07.881 "name": "BaseBdev4", 00:15:07.881 "uuid": "f1b49d2d-a424-4433-b424-02465274a8ec", 00:15:07.881 "is_configured": true, 00:15:07.881 "data_offset": 0, 00:15:07.881 "data_size": 65536 00:15:07.881 } 00:15:07.881 ] 00:15:07.881 }' 00:15:07.881 12:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.881 12:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.448 12:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:15:08.448 12:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.448 12:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.448 12:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.448 12:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.448 12:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:08.448 12:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:08.448 12:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.448 12:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.448 [2024-11-25 12:14:04.458497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:08.448 12:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.448 12:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:08.448 12:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:08.448 12:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:08.448 12:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:08.448 12:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:08.448 12:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:08.448 12:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:15:08.448 12:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.448 12:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.448 12:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.448 12:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.448 12:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.448 12:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:08.448 12:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.448 12:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.448 12:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.448 "name": "Existed_Raid", 00:15:08.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.448 "strip_size_kb": 64, 00:15:08.448 "state": "configuring", 00:15:08.448 "raid_level": "raid0", 00:15:08.448 "superblock": false, 00:15:08.448 "num_base_bdevs": 4, 00:15:08.448 "num_base_bdevs_discovered": 3, 00:15:08.448 "num_base_bdevs_operational": 4, 00:15:08.448 "base_bdevs_list": [ 00:15:08.448 { 00:15:08.448 "name": "BaseBdev1", 00:15:08.448 "uuid": "477d1616-868f-4d73-b83f-8496ba5f5618", 00:15:08.448 "is_configured": true, 00:15:08.448 "data_offset": 0, 00:15:08.448 "data_size": 65536 00:15:08.448 }, 00:15:08.448 { 00:15:08.448 "name": null, 00:15:08.448 "uuid": "415d8ab5-0cec-4447-a1c4-9d16fef0ae95", 00:15:08.448 "is_configured": false, 00:15:08.448 "data_offset": 0, 00:15:08.448 "data_size": 65536 00:15:08.448 }, 00:15:08.448 { 00:15:08.448 "name": "BaseBdev3", 00:15:08.448 "uuid": "dc996e21-5211-4ccc-9456-a26b2d48d4eb", 00:15:08.448 "is_configured": 
true, 00:15:08.448 "data_offset": 0, 00:15:08.448 "data_size": 65536 00:15:08.448 }, 00:15:08.448 { 00:15:08.448 "name": "BaseBdev4", 00:15:08.448 "uuid": "f1b49d2d-a424-4433-b424-02465274a8ec", 00:15:08.448 "is_configured": true, 00:15:08.448 "data_offset": 0, 00:15:08.448 "data_size": 65536 00:15:08.448 } 00:15:08.448 ] 00:15:08.448 }' 00:15:08.448 12:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.448 12:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.015 12:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.015 12:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:09.015 12:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.015 12:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.015 12:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.015 12:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:09.015 12:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:09.015 12:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.015 12:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.015 [2024-11-25 12:14:05.002679] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:09.015 12:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.015 12:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:09.015 12:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:15:09.015 12:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:09.015 12:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:09.015 12:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:09.015 12:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:09.015 12:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.015 12:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.015 12:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.015 12:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.015 12:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.015 12:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.015 12:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.015 12:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.273 12:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.273 12:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.273 "name": "Existed_Raid", 00:15:09.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.273 "strip_size_kb": 64, 00:15:09.273 "state": "configuring", 00:15:09.273 "raid_level": "raid0", 00:15:09.273 "superblock": false, 00:15:09.273 "num_base_bdevs": 4, 00:15:09.273 "num_base_bdevs_discovered": 2, 00:15:09.273 "num_base_bdevs_operational": 4, 00:15:09.273 
"base_bdevs_list": [ 00:15:09.273 { 00:15:09.273 "name": null, 00:15:09.273 "uuid": "477d1616-868f-4d73-b83f-8496ba5f5618", 00:15:09.273 "is_configured": false, 00:15:09.273 "data_offset": 0, 00:15:09.273 "data_size": 65536 00:15:09.273 }, 00:15:09.273 { 00:15:09.273 "name": null, 00:15:09.273 "uuid": "415d8ab5-0cec-4447-a1c4-9d16fef0ae95", 00:15:09.273 "is_configured": false, 00:15:09.273 "data_offset": 0, 00:15:09.273 "data_size": 65536 00:15:09.273 }, 00:15:09.273 { 00:15:09.273 "name": "BaseBdev3", 00:15:09.273 "uuid": "dc996e21-5211-4ccc-9456-a26b2d48d4eb", 00:15:09.273 "is_configured": true, 00:15:09.273 "data_offset": 0, 00:15:09.273 "data_size": 65536 00:15:09.273 }, 00:15:09.273 { 00:15:09.273 "name": "BaseBdev4", 00:15:09.273 "uuid": "f1b49d2d-a424-4433-b424-02465274a8ec", 00:15:09.273 "is_configured": true, 00:15:09.273 "data_offset": 0, 00:15:09.273 "data_size": 65536 00:15:09.273 } 00:15:09.273 ] 00:15:09.273 }' 00:15:09.273 12:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.273 12:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.840 12:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.840 12:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:09.840 12:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.840 12:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.840 12:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.840 12:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:09.840 12:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:09.840 12:14:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.840 12:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.840 [2024-11-25 12:14:05.716319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:09.840 12:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.840 12:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:09.840 12:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:09.840 12:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:09.840 12:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:09.840 12:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:09.840 12:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:09.840 12:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.840 12:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.840 12:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.840 12:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.840 12:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.840 12:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.840 12:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.840 12:14:05 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:15:09.840 12:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.840 12:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.840 "name": "Existed_Raid", 00:15:09.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.840 "strip_size_kb": 64, 00:15:09.840 "state": "configuring", 00:15:09.840 "raid_level": "raid0", 00:15:09.840 "superblock": false, 00:15:09.840 "num_base_bdevs": 4, 00:15:09.840 "num_base_bdevs_discovered": 3, 00:15:09.840 "num_base_bdevs_operational": 4, 00:15:09.840 "base_bdevs_list": [ 00:15:09.840 { 00:15:09.840 "name": null, 00:15:09.840 "uuid": "477d1616-868f-4d73-b83f-8496ba5f5618", 00:15:09.840 "is_configured": false, 00:15:09.840 "data_offset": 0, 00:15:09.840 "data_size": 65536 00:15:09.840 }, 00:15:09.840 { 00:15:09.840 "name": "BaseBdev2", 00:15:09.840 "uuid": "415d8ab5-0cec-4447-a1c4-9d16fef0ae95", 00:15:09.840 "is_configured": true, 00:15:09.840 "data_offset": 0, 00:15:09.840 "data_size": 65536 00:15:09.840 }, 00:15:09.840 { 00:15:09.840 "name": "BaseBdev3", 00:15:09.840 "uuid": "dc996e21-5211-4ccc-9456-a26b2d48d4eb", 00:15:09.840 "is_configured": true, 00:15:09.840 "data_offset": 0, 00:15:09.840 "data_size": 65536 00:15:09.840 }, 00:15:09.840 { 00:15:09.840 "name": "BaseBdev4", 00:15:09.840 "uuid": "f1b49d2d-a424-4433-b424-02465274a8ec", 00:15:09.840 "is_configured": true, 00:15:09.840 "data_offset": 0, 00:15:09.840 "data_size": 65536 00:15:09.840 } 00:15:09.840 ] 00:15:09.840 }' 00:15:09.840 12:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.840 12:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.415 12:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:10.415 12:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:15:10.415 12:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.415 12:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 477d1616-868f-4d73-b83f-8496ba5f5618 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.416 [2024-11-25 12:14:06.383053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:10.416 [2024-11-25 12:14:06.383123] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:10.416 [2024-11-25 12:14:06.383136] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:15:10.416 [2024-11-25 12:14:06.383513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:10.416 [2024-11-25 12:14:06.383711] bdev_raid.c:1764:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x617000008200 00:15:10.416 [2024-11-25 12:14:06.383733] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:10.416 [2024-11-25 12:14:06.384042] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.416 NewBaseBdev 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.416 [ 00:15:10.416 { 00:15:10.416 "name": "NewBaseBdev", 00:15:10.416 
"aliases": [ 00:15:10.416 "477d1616-868f-4d73-b83f-8496ba5f5618" 00:15:10.416 ], 00:15:10.416 "product_name": "Malloc disk", 00:15:10.416 "block_size": 512, 00:15:10.416 "num_blocks": 65536, 00:15:10.416 "uuid": "477d1616-868f-4d73-b83f-8496ba5f5618", 00:15:10.416 "assigned_rate_limits": { 00:15:10.416 "rw_ios_per_sec": 0, 00:15:10.416 "rw_mbytes_per_sec": 0, 00:15:10.416 "r_mbytes_per_sec": 0, 00:15:10.416 "w_mbytes_per_sec": 0 00:15:10.416 }, 00:15:10.416 "claimed": true, 00:15:10.416 "claim_type": "exclusive_write", 00:15:10.416 "zoned": false, 00:15:10.416 "supported_io_types": { 00:15:10.416 "read": true, 00:15:10.416 "write": true, 00:15:10.416 "unmap": true, 00:15:10.416 "flush": true, 00:15:10.416 "reset": true, 00:15:10.416 "nvme_admin": false, 00:15:10.416 "nvme_io": false, 00:15:10.416 "nvme_io_md": false, 00:15:10.416 "write_zeroes": true, 00:15:10.416 "zcopy": true, 00:15:10.416 "get_zone_info": false, 00:15:10.416 "zone_management": false, 00:15:10.416 "zone_append": false, 00:15:10.416 "compare": false, 00:15:10.416 "compare_and_write": false, 00:15:10.416 "abort": true, 00:15:10.416 "seek_hole": false, 00:15:10.416 "seek_data": false, 00:15:10.416 "copy": true, 00:15:10.416 "nvme_iov_md": false 00:15:10.416 }, 00:15:10.416 "memory_domains": [ 00:15:10.416 { 00:15:10.416 "dma_device_id": "system", 00:15:10.416 "dma_device_type": 1 00:15:10.416 }, 00:15:10.416 { 00:15:10.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.416 "dma_device_type": 2 00:15:10.416 } 00:15:10.416 ], 00:15:10.416 "driver_specific": {} 00:15:10.416 } 00:15:10.416 ] 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.416 "name": "Existed_Raid", 00:15:10.416 "uuid": "5f4904a6-034d-49ec-a454-575a65bfddd2", 00:15:10.416 "strip_size_kb": 64, 00:15:10.416 "state": "online", 00:15:10.416 "raid_level": "raid0", 00:15:10.416 "superblock": false, 00:15:10.416 "num_base_bdevs": 4, 00:15:10.416 "num_base_bdevs_discovered": 4, 00:15:10.416 "num_base_bdevs_operational": 4, 00:15:10.416 
"base_bdevs_list": [ 00:15:10.416 { 00:15:10.416 "name": "NewBaseBdev", 00:15:10.416 "uuid": "477d1616-868f-4d73-b83f-8496ba5f5618", 00:15:10.416 "is_configured": true, 00:15:10.416 "data_offset": 0, 00:15:10.416 "data_size": 65536 00:15:10.416 }, 00:15:10.416 { 00:15:10.416 "name": "BaseBdev2", 00:15:10.416 "uuid": "415d8ab5-0cec-4447-a1c4-9d16fef0ae95", 00:15:10.416 "is_configured": true, 00:15:10.416 "data_offset": 0, 00:15:10.416 "data_size": 65536 00:15:10.416 }, 00:15:10.416 { 00:15:10.416 "name": "BaseBdev3", 00:15:10.416 "uuid": "dc996e21-5211-4ccc-9456-a26b2d48d4eb", 00:15:10.416 "is_configured": true, 00:15:10.416 "data_offset": 0, 00:15:10.416 "data_size": 65536 00:15:10.416 }, 00:15:10.416 { 00:15:10.416 "name": "BaseBdev4", 00:15:10.416 "uuid": "f1b49d2d-a424-4433-b424-02465274a8ec", 00:15:10.416 "is_configured": true, 00:15:10.416 "data_offset": 0, 00:15:10.416 "data_size": 65536 00:15:10.416 } 00:15:10.416 ] 00:15:10.416 }' 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.416 12:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.984 12:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:10.984 12:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:10.984 12:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:10.984 12:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:10.984 12:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:10.984 12:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:10.984 12:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:10.984 12:14:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:10.984 12:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.984 12:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.984 [2024-11-25 12:14:06.931732] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:10.984 12:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.984 12:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:10.984 "name": "Existed_Raid", 00:15:10.984 "aliases": [ 00:15:10.984 "5f4904a6-034d-49ec-a454-575a65bfddd2" 00:15:10.984 ], 00:15:10.984 "product_name": "Raid Volume", 00:15:10.984 "block_size": 512, 00:15:10.984 "num_blocks": 262144, 00:15:10.984 "uuid": "5f4904a6-034d-49ec-a454-575a65bfddd2", 00:15:10.984 "assigned_rate_limits": { 00:15:10.984 "rw_ios_per_sec": 0, 00:15:10.984 "rw_mbytes_per_sec": 0, 00:15:10.984 "r_mbytes_per_sec": 0, 00:15:10.984 "w_mbytes_per_sec": 0 00:15:10.984 }, 00:15:10.984 "claimed": false, 00:15:10.984 "zoned": false, 00:15:10.984 "supported_io_types": { 00:15:10.984 "read": true, 00:15:10.984 "write": true, 00:15:10.984 "unmap": true, 00:15:10.984 "flush": true, 00:15:10.984 "reset": true, 00:15:10.984 "nvme_admin": false, 00:15:10.984 "nvme_io": false, 00:15:10.984 "nvme_io_md": false, 00:15:10.984 "write_zeroes": true, 00:15:10.984 "zcopy": false, 00:15:10.984 "get_zone_info": false, 00:15:10.984 "zone_management": false, 00:15:10.984 "zone_append": false, 00:15:10.984 "compare": false, 00:15:10.984 "compare_and_write": false, 00:15:10.984 "abort": false, 00:15:10.984 "seek_hole": false, 00:15:10.984 "seek_data": false, 00:15:10.984 "copy": false, 00:15:10.984 "nvme_iov_md": false 00:15:10.984 }, 00:15:10.984 "memory_domains": [ 00:15:10.984 { 00:15:10.984 "dma_device_id": "system", 00:15:10.984 "dma_device_type": 1 
00:15:10.984 }, 00:15:10.984 { 00:15:10.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.984 "dma_device_type": 2 00:15:10.984 }, 00:15:10.984 { 00:15:10.984 "dma_device_id": "system", 00:15:10.984 "dma_device_type": 1 00:15:10.985 }, 00:15:10.985 { 00:15:10.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.985 "dma_device_type": 2 00:15:10.985 }, 00:15:10.985 { 00:15:10.985 "dma_device_id": "system", 00:15:10.985 "dma_device_type": 1 00:15:10.985 }, 00:15:10.985 { 00:15:10.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.985 "dma_device_type": 2 00:15:10.985 }, 00:15:10.985 { 00:15:10.985 "dma_device_id": "system", 00:15:10.985 "dma_device_type": 1 00:15:10.985 }, 00:15:10.985 { 00:15:10.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.985 "dma_device_type": 2 00:15:10.985 } 00:15:10.985 ], 00:15:10.985 "driver_specific": { 00:15:10.985 "raid": { 00:15:10.985 "uuid": "5f4904a6-034d-49ec-a454-575a65bfddd2", 00:15:10.985 "strip_size_kb": 64, 00:15:10.985 "state": "online", 00:15:10.985 "raid_level": "raid0", 00:15:10.985 "superblock": false, 00:15:10.985 "num_base_bdevs": 4, 00:15:10.985 "num_base_bdevs_discovered": 4, 00:15:10.985 "num_base_bdevs_operational": 4, 00:15:10.985 "base_bdevs_list": [ 00:15:10.985 { 00:15:10.985 "name": "NewBaseBdev", 00:15:10.985 "uuid": "477d1616-868f-4d73-b83f-8496ba5f5618", 00:15:10.985 "is_configured": true, 00:15:10.985 "data_offset": 0, 00:15:10.985 "data_size": 65536 00:15:10.985 }, 00:15:10.985 { 00:15:10.985 "name": "BaseBdev2", 00:15:10.985 "uuid": "415d8ab5-0cec-4447-a1c4-9d16fef0ae95", 00:15:10.985 "is_configured": true, 00:15:10.985 "data_offset": 0, 00:15:10.985 "data_size": 65536 00:15:10.985 }, 00:15:10.985 { 00:15:10.985 "name": "BaseBdev3", 00:15:10.985 "uuid": "dc996e21-5211-4ccc-9456-a26b2d48d4eb", 00:15:10.985 "is_configured": true, 00:15:10.985 "data_offset": 0, 00:15:10.985 "data_size": 65536 00:15:10.985 }, 00:15:10.985 { 00:15:10.985 "name": "BaseBdev4", 00:15:10.985 "uuid": 
"f1b49d2d-a424-4433-b424-02465274a8ec", 00:15:10.985 "is_configured": true, 00:15:10.985 "data_offset": 0, 00:15:10.985 "data_size": 65536 00:15:10.985 } 00:15:10.985 ] 00:15:10.985 } 00:15:10.985 } 00:15:10.985 }' 00:15:10.985 12:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:10.985 12:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:10.985 BaseBdev2 00:15:10.985 BaseBdev3 00:15:10.985 BaseBdev4' 00:15:10.985 12:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:11.244 12:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:11.244 12:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:11.244 12:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:11.244 12:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:11.244 12:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.244 12:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.244 12:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.244 12:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:11.244 12:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:11.244 12:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:11.244 12:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev2 00:15:11.244 12:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:11.244 12:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.244 12:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.244 12:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.244 12:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:11.244 12:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:11.244 12:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:11.244 12:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:11.244 12:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:11.244 12:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.244 12:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.244 12:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.244 12:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:11.244 12:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:11.244 12:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:11.244 12:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:11.244 12:14:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.244 12:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:11.244 12:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.244 12:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.244 12:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:11.244 12:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:11.244 12:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:11.244 12:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.244 12:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.502 [2024-11-25 12:14:07.335404] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:11.502 [2024-11-25 12:14:07.335461] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:11.502 [2024-11-25 12:14:07.335568] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:11.502 [2024-11-25 12:14:07.335670] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:11.502 [2024-11-25 12:14:07.335687] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:11.502 12:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.502 12:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69469 00:15:11.502 12:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 69469 ']' 00:15:11.502 
12:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69469 00:15:11.502 12:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:11.502 12:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:11.502 12:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69469 00:15:11.502 killing process with pid 69469 00:15:11.502 12:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:11.502 12:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:11.502 12:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69469' 00:15:11.502 12:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69469 00:15:11.502 [2024-11-25 12:14:07.369736] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:11.502 12:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69469 00:15:11.761 [2024-11-25 12:14:07.727137] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:12.695 ************************************ 00:15:12.695 END TEST raid_state_function_test 00:15:12.695 ************************************ 00:15:12.695 12:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:12.695 00:15:12.695 real 0m12.942s 00:15:12.695 user 0m21.442s 00:15:12.695 sys 0m1.821s 00:15:12.695 12:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:12.695 12:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.954 12:14:08 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:15:12.954 12:14:08 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:12.954 12:14:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:12.954 12:14:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:12.954 ************************************ 00:15:12.954 START TEST raid_state_function_test_sb 00:15:12.954 ************************************ 00:15:12.954 12:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:15:12.954 12:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:15:12.954 12:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:12.954 12:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:12.954 12:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:12.954 12:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:12.954 12:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:12.954 12:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:12.954 12:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:12.954 12:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:12.954 12:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:12.954 12:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:12.954 12:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:12.954 12:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:12.954 12:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:15:12.954 12:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:12.954 12:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:12.954 12:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:12.954 12:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:12.954 12:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:12.954 12:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:12.954 12:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:12.954 12:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:12.954 12:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:12.954 12:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:12.954 12:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:15:12.954 12:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:12.954 12:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:12.954 12:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:12.954 12:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:12.954 12:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70154 00:15:12.954 12:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L 
bdev_raid 00:15:12.954 Process raid pid: 70154 00:15:12.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.954 12:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70154' 00:15:12.954 12:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70154 00:15:12.955 12:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70154 ']' 00:15:12.955 12:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.955 12:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:12.955 12:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.955 12:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:12.955 12:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.955 [2024-11-25 12:14:08.944760] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:15:12.955 [2024-11-25 12:14:08.945803] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:13.213 [2024-11-25 12:14:09.138164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.213 [2024-11-25 12:14:09.270484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.471 [2024-11-25 12:14:09.476180] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:13.471 [2024-11-25 12:14:09.476476] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:14.037 12:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:14.037 12:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:14.037 12:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:14.037 12:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.037 12:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.037 [2024-11-25 12:14:09.927745] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:14.037 [2024-11-25 12:14:09.927818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:14.037 [2024-11-25 12:14:09.927837] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:14.037 [2024-11-25 12:14:09.927853] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:14.037 [2024-11-25 12:14:09.927863] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:15:14.037 [2024-11-25 12:14:09.927877] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:14.037 [2024-11-25 12:14:09.927887] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:14.037 [2024-11-25 12:14:09.927900] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:14.037 12:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.037 12:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:14.037 12:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:14.037 12:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:14.037 12:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:14.037 12:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.037 12:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:14.037 12:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.037 12:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.037 12:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.037 12:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.037 12:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.037 12:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.037 12:14:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.037 12:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.037 12:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.037 12:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.037 "name": "Existed_Raid", 00:15:14.037 "uuid": "890fa1fd-4d80-4aed-88bf-8d69e1b988a0", 00:15:14.037 "strip_size_kb": 64, 00:15:14.037 "state": "configuring", 00:15:14.037 "raid_level": "raid0", 00:15:14.037 "superblock": true, 00:15:14.037 "num_base_bdevs": 4, 00:15:14.037 "num_base_bdevs_discovered": 0, 00:15:14.037 "num_base_bdevs_operational": 4, 00:15:14.037 "base_bdevs_list": [ 00:15:14.037 { 00:15:14.037 "name": "BaseBdev1", 00:15:14.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.037 "is_configured": false, 00:15:14.037 "data_offset": 0, 00:15:14.037 "data_size": 0 00:15:14.037 }, 00:15:14.037 { 00:15:14.037 "name": "BaseBdev2", 00:15:14.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.037 "is_configured": false, 00:15:14.037 "data_offset": 0, 00:15:14.037 "data_size": 0 00:15:14.037 }, 00:15:14.037 { 00:15:14.037 "name": "BaseBdev3", 00:15:14.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.037 "is_configured": false, 00:15:14.037 "data_offset": 0, 00:15:14.037 "data_size": 0 00:15:14.037 }, 00:15:14.037 { 00:15:14.037 "name": "BaseBdev4", 00:15:14.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.037 "is_configured": false, 00:15:14.037 "data_offset": 0, 00:15:14.037 "data_size": 0 00:15:14.037 } 00:15:14.037 ] 00:15:14.037 }' 00:15:14.037 12:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.037 12:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.604 12:14:10 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:14.604 12:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.604 12:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.604 [2024-11-25 12:14:10.411764] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:14.604 [2024-11-25 12:14:10.411836] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:14.604 12:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.604 12:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:14.604 12:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.604 12:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.604 [2024-11-25 12:14:10.419742] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:14.604 [2024-11-25 12:14:10.419795] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:14.604 [2024-11-25 12:14:10.419810] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:14.604 [2024-11-25 12:14:10.419827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:14.604 [2024-11-25 12:14:10.419836] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:14.604 [2024-11-25 12:14:10.419850] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:14.604 [2024-11-25 12:14:10.419860] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:15:14.604 [2024-11-25 12:14:10.419873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:14.604 12:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.604 12:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:14.604 12:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.604 12:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.604 [2024-11-25 12:14:10.464468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:14.604 BaseBdev1 00:15:14.604 12:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.604 12:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:14.604 12:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:14.604 12:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:14.604 12:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:14.604 12:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:14.604 12:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:14.604 12:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:14.604 12:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.604 12:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.604 12:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:14.604 12:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:14.604 12:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.604 12:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.604 [ 00:15:14.604 { 00:15:14.604 "name": "BaseBdev1", 00:15:14.604 "aliases": [ 00:15:14.604 "bdcec644-bfe4-49ae-b7c2-1690bb770ebf" 00:15:14.604 ], 00:15:14.604 "product_name": "Malloc disk", 00:15:14.604 "block_size": 512, 00:15:14.604 "num_blocks": 65536, 00:15:14.604 "uuid": "bdcec644-bfe4-49ae-b7c2-1690bb770ebf", 00:15:14.604 "assigned_rate_limits": { 00:15:14.604 "rw_ios_per_sec": 0, 00:15:14.604 "rw_mbytes_per_sec": 0, 00:15:14.604 "r_mbytes_per_sec": 0, 00:15:14.604 "w_mbytes_per_sec": 0 00:15:14.604 }, 00:15:14.604 "claimed": true, 00:15:14.604 "claim_type": "exclusive_write", 00:15:14.604 "zoned": false, 00:15:14.604 "supported_io_types": { 00:15:14.604 "read": true, 00:15:14.604 "write": true, 00:15:14.604 "unmap": true, 00:15:14.604 "flush": true, 00:15:14.604 "reset": true, 00:15:14.604 "nvme_admin": false, 00:15:14.604 "nvme_io": false, 00:15:14.604 "nvme_io_md": false, 00:15:14.604 "write_zeroes": true, 00:15:14.604 "zcopy": true, 00:15:14.604 "get_zone_info": false, 00:15:14.604 "zone_management": false, 00:15:14.604 "zone_append": false, 00:15:14.604 "compare": false, 00:15:14.604 "compare_and_write": false, 00:15:14.604 "abort": true, 00:15:14.604 "seek_hole": false, 00:15:14.604 "seek_data": false, 00:15:14.604 "copy": true, 00:15:14.604 "nvme_iov_md": false 00:15:14.604 }, 00:15:14.604 "memory_domains": [ 00:15:14.604 { 00:15:14.604 "dma_device_id": "system", 00:15:14.604 "dma_device_type": 1 00:15:14.604 }, 00:15:14.604 { 00:15:14.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.604 "dma_device_type": 2 00:15:14.604 } 00:15:14.604 ], 00:15:14.604 "driver_specific": {} 
00:15:14.604 } 00:15:14.604 ] 00:15:14.604 12:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.604 12:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:14.604 12:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:14.604 12:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:14.604 12:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:14.604 12:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:14.604 12:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.604 12:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:14.604 12:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.604 12:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.604 12:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.604 12:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.604 12:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.604 12:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.604 12:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.604 12:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.604 12:14:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.604 12:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.604 "name": "Existed_Raid", 00:15:14.604 "uuid": "fbc10931-a718-4498-a466-af6e347daf3d", 00:15:14.604 "strip_size_kb": 64, 00:15:14.604 "state": "configuring", 00:15:14.604 "raid_level": "raid0", 00:15:14.604 "superblock": true, 00:15:14.604 "num_base_bdevs": 4, 00:15:14.604 "num_base_bdevs_discovered": 1, 00:15:14.604 "num_base_bdevs_operational": 4, 00:15:14.604 "base_bdevs_list": [ 00:15:14.604 { 00:15:14.604 "name": "BaseBdev1", 00:15:14.604 "uuid": "bdcec644-bfe4-49ae-b7c2-1690bb770ebf", 00:15:14.604 "is_configured": true, 00:15:14.604 "data_offset": 2048, 00:15:14.604 "data_size": 63488 00:15:14.604 }, 00:15:14.604 { 00:15:14.604 "name": "BaseBdev2", 00:15:14.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.604 "is_configured": false, 00:15:14.604 "data_offset": 0, 00:15:14.604 "data_size": 0 00:15:14.604 }, 00:15:14.604 { 00:15:14.604 "name": "BaseBdev3", 00:15:14.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.604 "is_configured": false, 00:15:14.605 "data_offset": 0, 00:15:14.605 "data_size": 0 00:15:14.605 }, 00:15:14.605 { 00:15:14.605 "name": "BaseBdev4", 00:15:14.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.605 "is_configured": false, 00:15:14.605 "data_offset": 0, 00:15:14.605 "data_size": 0 00:15:14.605 } 00:15:14.605 ] 00:15:14.605 }' 00:15:14.605 12:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.605 12:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.173 12:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:15.173 12:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.173 12:14:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:15.173 [2024-11-25 12:14:10.984606] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:15.173 [2024-11-25 12:14:10.984672] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:15.173 12:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.173 12:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:15.173 12:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.173 12:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.173 [2024-11-25 12:14:10.992668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:15.173 [2024-11-25 12:14:10.995240] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:15.173 [2024-11-25 12:14:10.995296] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:15.173 [2024-11-25 12:14:10.995312] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:15.173 [2024-11-25 12:14:10.995330] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:15.173 [2024-11-25 12:14:10.995360] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:15.173 [2024-11-25 12:14:10.995377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:15.173 12:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.173 12:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:15.173 12:14:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:15.173 12:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:15.173 12:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:15.173 12:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:15.173 12:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:15.173 12:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.173 12:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:15.173 12:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.173 12:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.173 12:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.173 12:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.173 12:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.173 12:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.173 12:14:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.173 12:14:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.173 12:14:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.173 12:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.173 "name": 
"Existed_Raid", 00:15:15.173 "uuid": "10ec8c4f-0be9-4cb0-8af7-b17e93676d4e", 00:15:15.173 "strip_size_kb": 64, 00:15:15.173 "state": "configuring", 00:15:15.173 "raid_level": "raid0", 00:15:15.173 "superblock": true, 00:15:15.173 "num_base_bdevs": 4, 00:15:15.173 "num_base_bdevs_discovered": 1, 00:15:15.173 "num_base_bdevs_operational": 4, 00:15:15.173 "base_bdevs_list": [ 00:15:15.173 { 00:15:15.173 "name": "BaseBdev1", 00:15:15.173 "uuid": "bdcec644-bfe4-49ae-b7c2-1690bb770ebf", 00:15:15.173 "is_configured": true, 00:15:15.173 "data_offset": 2048, 00:15:15.173 "data_size": 63488 00:15:15.173 }, 00:15:15.173 { 00:15:15.173 "name": "BaseBdev2", 00:15:15.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.173 "is_configured": false, 00:15:15.173 "data_offset": 0, 00:15:15.173 "data_size": 0 00:15:15.173 }, 00:15:15.173 { 00:15:15.173 "name": "BaseBdev3", 00:15:15.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.173 "is_configured": false, 00:15:15.173 "data_offset": 0, 00:15:15.173 "data_size": 0 00:15:15.173 }, 00:15:15.173 { 00:15:15.173 "name": "BaseBdev4", 00:15:15.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.173 "is_configured": false, 00:15:15.173 "data_offset": 0, 00:15:15.173 "data_size": 0 00:15:15.173 } 00:15:15.173 ] 00:15:15.173 }' 00:15:15.173 12:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.173 12:14:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.432 12:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:15.432 12:14:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.432 12:14:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.432 [2024-11-25 12:14:11.518933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:15:15.432 BaseBdev2 00:15:15.691 12:14:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.691 12:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:15.691 12:14:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:15.691 12:14:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:15.691 12:14:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:15.691 12:14:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:15.691 12:14:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:15.691 12:14:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:15.691 12:14:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.691 12:14:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.691 12:14:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.691 12:14:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:15.691 12:14:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.691 12:14:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.691 [ 00:15:15.691 { 00:15:15.691 "name": "BaseBdev2", 00:15:15.691 "aliases": [ 00:15:15.691 "adc26038-3522-4271-81d3-c8f910afe337" 00:15:15.691 ], 00:15:15.691 "product_name": "Malloc disk", 00:15:15.691 "block_size": 512, 00:15:15.691 "num_blocks": 65536, 00:15:15.691 "uuid": "adc26038-3522-4271-81d3-c8f910afe337", 00:15:15.691 
"assigned_rate_limits": { 00:15:15.691 "rw_ios_per_sec": 0, 00:15:15.691 "rw_mbytes_per_sec": 0, 00:15:15.691 "r_mbytes_per_sec": 0, 00:15:15.691 "w_mbytes_per_sec": 0 00:15:15.691 }, 00:15:15.691 "claimed": true, 00:15:15.691 "claim_type": "exclusive_write", 00:15:15.691 "zoned": false, 00:15:15.691 "supported_io_types": { 00:15:15.691 "read": true, 00:15:15.691 "write": true, 00:15:15.691 "unmap": true, 00:15:15.691 "flush": true, 00:15:15.691 "reset": true, 00:15:15.691 "nvme_admin": false, 00:15:15.691 "nvme_io": false, 00:15:15.691 "nvme_io_md": false, 00:15:15.691 "write_zeroes": true, 00:15:15.691 "zcopy": true, 00:15:15.691 "get_zone_info": false, 00:15:15.691 "zone_management": false, 00:15:15.691 "zone_append": false, 00:15:15.691 "compare": false, 00:15:15.691 "compare_and_write": false, 00:15:15.691 "abort": true, 00:15:15.691 "seek_hole": false, 00:15:15.691 "seek_data": false, 00:15:15.691 "copy": true, 00:15:15.691 "nvme_iov_md": false 00:15:15.691 }, 00:15:15.691 "memory_domains": [ 00:15:15.691 { 00:15:15.691 "dma_device_id": "system", 00:15:15.691 "dma_device_type": 1 00:15:15.691 }, 00:15:15.691 { 00:15:15.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.691 "dma_device_type": 2 00:15:15.691 } 00:15:15.691 ], 00:15:15.691 "driver_specific": {} 00:15:15.691 } 00:15:15.691 ] 00:15:15.691 12:14:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.691 12:14:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:15.691 12:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:15.691 12:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:15.691 12:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:15.691 12:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:15:15.691 12:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:15.691 12:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:15.691 12:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.691 12:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:15.691 12:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.691 12:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.692 12:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.692 12:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.692 12:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.692 12:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.692 12:14:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.692 12:14:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.692 12:14:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.692 12:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.692 "name": "Existed_Raid", 00:15:15.692 "uuid": "10ec8c4f-0be9-4cb0-8af7-b17e93676d4e", 00:15:15.692 "strip_size_kb": 64, 00:15:15.692 "state": "configuring", 00:15:15.692 "raid_level": "raid0", 00:15:15.692 "superblock": true, 00:15:15.692 "num_base_bdevs": 4, 00:15:15.692 "num_base_bdevs_discovered": 2, 00:15:15.692 "num_base_bdevs_operational": 4, 
00:15:15.692 "base_bdevs_list": [ 00:15:15.692 { 00:15:15.692 "name": "BaseBdev1", 00:15:15.692 "uuid": "bdcec644-bfe4-49ae-b7c2-1690bb770ebf", 00:15:15.692 "is_configured": true, 00:15:15.692 "data_offset": 2048, 00:15:15.692 "data_size": 63488 00:15:15.692 }, 00:15:15.692 { 00:15:15.692 "name": "BaseBdev2", 00:15:15.692 "uuid": "adc26038-3522-4271-81d3-c8f910afe337", 00:15:15.692 "is_configured": true, 00:15:15.692 "data_offset": 2048, 00:15:15.692 "data_size": 63488 00:15:15.692 }, 00:15:15.692 { 00:15:15.692 "name": "BaseBdev3", 00:15:15.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.692 "is_configured": false, 00:15:15.692 "data_offset": 0, 00:15:15.692 "data_size": 0 00:15:15.692 }, 00:15:15.692 { 00:15:15.692 "name": "BaseBdev4", 00:15:15.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.692 "is_configured": false, 00:15:15.692 "data_offset": 0, 00:15:15.692 "data_size": 0 00:15:15.692 } 00:15:15.692 ] 00:15:15.692 }' 00:15:15.692 12:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.692 12:14:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.259 12:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:16.259 12:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.259 12:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.259 [2024-11-25 12:14:12.129474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:16.259 BaseBdev3 00:15:16.259 12:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.259 12:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:16.259 12:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:15:16.259 12:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:16.259 12:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:16.259 12:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:16.259 12:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:16.259 12:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:16.259 12:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.259 12:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.259 12:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.259 12:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:16.259 12:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.259 12:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.259 [ 00:15:16.259 { 00:15:16.259 "name": "BaseBdev3", 00:15:16.259 "aliases": [ 00:15:16.259 "9447c459-7d03-4504-ac80-395722769ab6" 00:15:16.259 ], 00:15:16.259 "product_name": "Malloc disk", 00:15:16.259 "block_size": 512, 00:15:16.259 "num_blocks": 65536, 00:15:16.259 "uuid": "9447c459-7d03-4504-ac80-395722769ab6", 00:15:16.259 "assigned_rate_limits": { 00:15:16.259 "rw_ios_per_sec": 0, 00:15:16.259 "rw_mbytes_per_sec": 0, 00:15:16.259 "r_mbytes_per_sec": 0, 00:15:16.259 "w_mbytes_per_sec": 0 00:15:16.259 }, 00:15:16.259 "claimed": true, 00:15:16.259 "claim_type": "exclusive_write", 00:15:16.259 "zoned": false, 00:15:16.259 "supported_io_types": { 00:15:16.259 "read": true, 00:15:16.259 
"write": true, 00:15:16.259 "unmap": true, 00:15:16.259 "flush": true, 00:15:16.259 "reset": true, 00:15:16.259 "nvme_admin": false, 00:15:16.259 "nvme_io": false, 00:15:16.259 "nvme_io_md": false, 00:15:16.259 "write_zeroes": true, 00:15:16.259 "zcopy": true, 00:15:16.259 "get_zone_info": false, 00:15:16.259 "zone_management": false, 00:15:16.259 "zone_append": false, 00:15:16.259 "compare": false, 00:15:16.259 "compare_and_write": false, 00:15:16.259 "abort": true, 00:15:16.259 "seek_hole": false, 00:15:16.259 "seek_data": false, 00:15:16.259 "copy": true, 00:15:16.259 "nvme_iov_md": false 00:15:16.259 }, 00:15:16.259 "memory_domains": [ 00:15:16.259 { 00:15:16.259 "dma_device_id": "system", 00:15:16.259 "dma_device_type": 1 00:15:16.259 }, 00:15:16.259 { 00:15:16.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.259 "dma_device_type": 2 00:15:16.259 } 00:15:16.259 ], 00:15:16.259 "driver_specific": {} 00:15:16.259 } 00:15:16.259 ] 00:15:16.259 12:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.259 12:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:16.259 12:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:16.259 12:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:16.259 12:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:16.259 12:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:16.259 12:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:16.259 12:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:16.259 12:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:15:16.259 12:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:16.259 12:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.259 12:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.259 12:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.259 12:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.259 12:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.259 12:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.259 12:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.259 12:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.259 12:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.259 12:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.259 "name": "Existed_Raid", 00:15:16.260 "uuid": "10ec8c4f-0be9-4cb0-8af7-b17e93676d4e", 00:15:16.260 "strip_size_kb": 64, 00:15:16.260 "state": "configuring", 00:15:16.260 "raid_level": "raid0", 00:15:16.260 "superblock": true, 00:15:16.260 "num_base_bdevs": 4, 00:15:16.260 "num_base_bdevs_discovered": 3, 00:15:16.260 "num_base_bdevs_operational": 4, 00:15:16.260 "base_bdevs_list": [ 00:15:16.260 { 00:15:16.260 "name": "BaseBdev1", 00:15:16.260 "uuid": "bdcec644-bfe4-49ae-b7c2-1690bb770ebf", 00:15:16.260 "is_configured": true, 00:15:16.260 "data_offset": 2048, 00:15:16.260 "data_size": 63488 00:15:16.260 }, 00:15:16.260 { 00:15:16.260 "name": "BaseBdev2", 00:15:16.260 "uuid": 
"adc26038-3522-4271-81d3-c8f910afe337", 00:15:16.260 "is_configured": true, 00:15:16.260 "data_offset": 2048, 00:15:16.260 "data_size": 63488 00:15:16.260 }, 00:15:16.260 { 00:15:16.260 "name": "BaseBdev3", 00:15:16.260 "uuid": "9447c459-7d03-4504-ac80-395722769ab6", 00:15:16.260 "is_configured": true, 00:15:16.260 "data_offset": 2048, 00:15:16.260 "data_size": 63488 00:15:16.260 }, 00:15:16.260 { 00:15:16.260 "name": "BaseBdev4", 00:15:16.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.260 "is_configured": false, 00:15:16.260 "data_offset": 0, 00:15:16.260 "data_size": 0 00:15:16.260 } 00:15:16.260 ] 00:15:16.260 }' 00:15:16.260 12:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.260 12:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.547 12:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:16.547 12:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.547 12:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.811 [2024-11-25 12:14:12.656070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:16.811 [2024-11-25 12:14:12.656460] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:16.811 [2024-11-25 12:14:12.656493] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:16.811 BaseBdev4 00:15:16.811 [2024-11-25 12:14:12.656847] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:16.811 [2024-11-25 12:14:12.657078] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:16.811 [2024-11-25 12:14:12.657101] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:15:16.811 [2024-11-25 12:14:12.657302] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:16.811 12:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.811 12:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:16.811 12:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:16.811 12:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:16.811 12:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:16.811 12:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:16.811 12:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:16.811 12:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:16.811 12:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.811 12:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.811 12:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.811 12:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:16.811 12:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.812 12:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.812 [ 00:15:16.812 { 00:15:16.812 "name": "BaseBdev4", 00:15:16.812 "aliases": [ 00:15:16.812 "431dd0bb-6c93-41d0-9434-806e22bbef98" 00:15:16.812 ], 00:15:16.812 "product_name": "Malloc disk", 00:15:16.812 "block_size": 512, 00:15:16.812 
"num_blocks": 65536, 00:15:16.812 "uuid": "431dd0bb-6c93-41d0-9434-806e22bbef98", 00:15:16.812 "assigned_rate_limits": { 00:15:16.812 "rw_ios_per_sec": 0, 00:15:16.812 "rw_mbytes_per_sec": 0, 00:15:16.812 "r_mbytes_per_sec": 0, 00:15:16.812 "w_mbytes_per_sec": 0 00:15:16.812 }, 00:15:16.812 "claimed": true, 00:15:16.812 "claim_type": "exclusive_write", 00:15:16.812 "zoned": false, 00:15:16.812 "supported_io_types": { 00:15:16.812 "read": true, 00:15:16.812 "write": true, 00:15:16.812 "unmap": true, 00:15:16.812 "flush": true, 00:15:16.812 "reset": true, 00:15:16.812 "nvme_admin": false, 00:15:16.812 "nvme_io": false, 00:15:16.812 "nvme_io_md": false, 00:15:16.812 "write_zeroes": true, 00:15:16.812 "zcopy": true, 00:15:16.812 "get_zone_info": false, 00:15:16.812 "zone_management": false, 00:15:16.812 "zone_append": false, 00:15:16.812 "compare": false, 00:15:16.812 "compare_and_write": false, 00:15:16.812 "abort": true, 00:15:16.812 "seek_hole": false, 00:15:16.812 "seek_data": false, 00:15:16.812 "copy": true, 00:15:16.812 "nvme_iov_md": false 00:15:16.812 }, 00:15:16.812 "memory_domains": [ 00:15:16.812 { 00:15:16.812 "dma_device_id": "system", 00:15:16.812 "dma_device_type": 1 00:15:16.812 }, 00:15:16.812 { 00:15:16.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.812 "dma_device_type": 2 00:15:16.812 } 00:15:16.812 ], 00:15:16.812 "driver_specific": {} 00:15:16.812 } 00:15:16.812 ] 00:15:16.812 12:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.812 12:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:16.812 12:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:16.812 12:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:16.812 12:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:15:16.812 12:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:16.812 12:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:16.812 12:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:16.812 12:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:16.812 12:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:16.812 12:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.812 12:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.812 12:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.812 12:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.812 12:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.812 12:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.812 12:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.812 12:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.812 12:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.812 12:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.812 "name": "Existed_Raid", 00:15:16.812 "uuid": "10ec8c4f-0be9-4cb0-8af7-b17e93676d4e", 00:15:16.812 "strip_size_kb": 64, 00:15:16.812 "state": "online", 00:15:16.812 "raid_level": "raid0", 00:15:16.812 "superblock": true, 00:15:16.812 "num_base_bdevs": 4, 
00:15:16.812 "num_base_bdevs_discovered": 4, 00:15:16.812 "num_base_bdevs_operational": 4, 00:15:16.812 "base_bdevs_list": [ 00:15:16.812 { 00:15:16.812 "name": "BaseBdev1", 00:15:16.812 "uuid": "bdcec644-bfe4-49ae-b7c2-1690bb770ebf", 00:15:16.812 "is_configured": true, 00:15:16.812 "data_offset": 2048, 00:15:16.812 "data_size": 63488 00:15:16.812 }, 00:15:16.812 { 00:15:16.812 "name": "BaseBdev2", 00:15:16.812 "uuid": "adc26038-3522-4271-81d3-c8f910afe337", 00:15:16.812 "is_configured": true, 00:15:16.812 "data_offset": 2048, 00:15:16.812 "data_size": 63488 00:15:16.812 }, 00:15:16.812 { 00:15:16.812 "name": "BaseBdev3", 00:15:16.812 "uuid": "9447c459-7d03-4504-ac80-395722769ab6", 00:15:16.812 "is_configured": true, 00:15:16.812 "data_offset": 2048, 00:15:16.812 "data_size": 63488 00:15:16.812 }, 00:15:16.812 { 00:15:16.812 "name": "BaseBdev4", 00:15:16.812 "uuid": "431dd0bb-6c93-41d0-9434-806e22bbef98", 00:15:16.812 "is_configured": true, 00:15:16.812 "data_offset": 2048, 00:15:16.812 "data_size": 63488 00:15:16.812 } 00:15:16.812 ] 00:15:16.812 }' 00:15:16.812 12:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.812 12:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.378 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:17.378 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:17.378 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:17.378 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:17.378 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:17.378 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:17.378 
12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:17.378 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:17.378 12:14:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.378 12:14:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.378 [2024-11-25 12:14:13.192727] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:17.378 12:14:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.378 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:17.378 "name": "Existed_Raid", 00:15:17.378 "aliases": [ 00:15:17.378 "10ec8c4f-0be9-4cb0-8af7-b17e93676d4e" 00:15:17.378 ], 00:15:17.378 "product_name": "Raid Volume", 00:15:17.378 "block_size": 512, 00:15:17.378 "num_blocks": 253952, 00:15:17.378 "uuid": "10ec8c4f-0be9-4cb0-8af7-b17e93676d4e", 00:15:17.378 "assigned_rate_limits": { 00:15:17.378 "rw_ios_per_sec": 0, 00:15:17.378 "rw_mbytes_per_sec": 0, 00:15:17.378 "r_mbytes_per_sec": 0, 00:15:17.378 "w_mbytes_per_sec": 0 00:15:17.378 }, 00:15:17.378 "claimed": false, 00:15:17.378 "zoned": false, 00:15:17.378 "supported_io_types": { 00:15:17.378 "read": true, 00:15:17.378 "write": true, 00:15:17.378 "unmap": true, 00:15:17.378 "flush": true, 00:15:17.378 "reset": true, 00:15:17.378 "nvme_admin": false, 00:15:17.378 "nvme_io": false, 00:15:17.378 "nvme_io_md": false, 00:15:17.378 "write_zeroes": true, 00:15:17.378 "zcopy": false, 00:15:17.378 "get_zone_info": false, 00:15:17.378 "zone_management": false, 00:15:17.378 "zone_append": false, 00:15:17.378 "compare": false, 00:15:17.378 "compare_and_write": false, 00:15:17.378 "abort": false, 00:15:17.378 "seek_hole": false, 00:15:17.378 "seek_data": false, 00:15:17.378 "copy": false, 00:15:17.378 
"nvme_iov_md": false 00:15:17.378 }, 00:15:17.378 "memory_domains": [ 00:15:17.378 { 00:15:17.378 "dma_device_id": "system", 00:15:17.378 "dma_device_type": 1 00:15:17.378 }, 00:15:17.378 { 00:15:17.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.378 "dma_device_type": 2 00:15:17.378 }, 00:15:17.378 { 00:15:17.378 "dma_device_id": "system", 00:15:17.378 "dma_device_type": 1 00:15:17.378 }, 00:15:17.378 { 00:15:17.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.378 "dma_device_type": 2 00:15:17.378 }, 00:15:17.378 { 00:15:17.378 "dma_device_id": "system", 00:15:17.378 "dma_device_type": 1 00:15:17.378 }, 00:15:17.378 { 00:15:17.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.378 "dma_device_type": 2 00:15:17.378 }, 00:15:17.378 { 00:15:17.378 "dma_device_id": "system", 00:15:17.378 "dma_device_type": 1 00:15:17.378 }, 00:15:17.378 { 00:15:17.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.378 "dma_device_type": 2 00:15:17.378 } 00:15:17.378 ], 00:15:17.378 "driver_specific": { 00:15:17.378 "raid": { 00:15:17.378 "uuid": "10ec8c4f-0be9-4cb0-8af7-b17e93676d4e", 00:15:17.378 "strip_size_kb": 64, 00:15:17.378 "state": "online", 00:15:17.378 "raid_level": "raid0", 00:15:17.378 "superblock": true, 00:15:17.378 "num_base_bdevs": 4, 00:15:17.378 "num_base_bdevs_discovered": 4, 00:15:17.378 "num_base_bdevs_operational": 4, 00:15:17.378 "base_bdevs_list": [ 00:15:17.378 { 00:15:17.378 "name": "BaseBdev1", 00:15:17.378 "uuid": "bdcec644-bfe4-49ae-b7c2-1690bb770ebf", 00:15:17.378 "is_configured": true, 00:15:17.378 "data_offset": 2048, 00:15:17.378 "data_size": 63488 00:15:17.378 }, 00:15:17.378 { 00:15:17.378 "name": "BaseBdev2", 00:15:17.378 "uuid": "adc26038-3522-4271-81d3-c8f910afe337", 00:15:17.378 "is_configured": true, 00:15:17.378 "data_offset": 2048, 00:15:17.378 "data_size": 63488 00:15:17.378 }, 00:15:17.378 { 00:15:17.378 "name": "BaseBdev3", 00:15:17.378 "uuid": "9447c459-7d03-4504-ac80-395722769ab6", 00:15:17.378 "is_configured": true, 
00:15:17.378 "data_offset": 2048, 00:15:17.378 "data_size": 63488 00:15:17.378 }, 00:15:17.378 { 00:15:17.378 "name": "BaseBdev4", 00:15:17.378 "uuid": "431dd0bb-6c93-41d0-9434-806e22bbef98", 00:15:17.378 "is_configured": true, 00:15:17.378 "data_offset": 2048, 00:15:17.378 "data_size": 63488 00:15:17.378 } 00:15:17.378 ] 00:15:17.378 } 00:15:17.378 } 00:15:17.378 }' 00:15:17.378 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:17.378 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:17.378 BaseBdev2 00:15:17.378 BaseBdev3 00:15:17.378 BaseBdev4' 00:15:17.379 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:17.379 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:17.379 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:17.379 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:17.379 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:17.379 12:14:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.379 12:14:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.379 12:14:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.379 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:17.379 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:17.379 12:14:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:17.379 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:17.379 12:14:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.379 12:14:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.379 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:17.379 12:14:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.379 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:17.379 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:17.379 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:17.379 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:17.379 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:17.379 12:14:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.379 12:14:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.379 12:14:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.638 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:17.638 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:17.638 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:15:17.638 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:17.638 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:17.638 12:14:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.638 12:14:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.638 12:14:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.638 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:17.638 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:17.638 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:17.638 12:14:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.638 12:14:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.638 [2024-11-25 12:14:13.528479] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:17.638 [2024-11-25 12:14:13.528517] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:17.638 [2024-11-25 12:14:13.528584] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:17.638 12:14:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.638 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:17.638 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:15:17.638 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:15:17.638 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:15:17.638 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:15:17.638 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:15:17.638 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:17.638 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:15:17.638 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:17.638 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.638 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:17.638 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.638 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.638 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.638 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.638 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:17.638 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.638 12:14:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.638 12:14:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.638 12:14:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:17.638 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.638 "name": "Existed_Raid", 00:15:17.639 "uuid": "10ec8c4f-0be9-4cb0-8af7-b17e93676d4e", 00:15:17.639 "strip_size_kb": 64, 00:15:17.639 "state": "offline", 00:15:17.639 "raid_level": "raid0", 00:15:17.639 "superblock": true, 00:15:17.639 "num_base_bdevs": 4, 00:15:17.639 "num_base_bdevs_discovered": 3, 00:15:17.639 "num_base_bdevs_operational": 3, 00:15:17.639 "base_bdevs_list": [ 00:15:17.639 { 00:15:17.639 "name": null, 00:15:17.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.639 "is_configured": false, 00:15:17.639 "data_offset": 0, 00:15:17.639 "data_size": 63488 00:15:17.639 }, 00:15:17.639 { 00:15:17.639 "name": "BaseBdev2", 00:15:17.639 "uuid": "adc26038-3522-4271-81d3-c8f910afe337", 00:15:17.639 "is_configured": true, 00:15:17.639 "data_offset": 2048, 00:15:17.639 "data_size": 63488 00:15:17.639 }, 00:15:17.639 { 00:15:17.639 "name": "BaseBdev3", 00:15:17.639 "uuid": "9447c459-7d03-4504-ac80-395722769ab6", 00:15:17.639 "is_configured": true, 00:15:17.639 "data_offset": 2048, 00:15:17.639 "data_size": 63488 00:15:17.639 }, 00:15:17.639 { 00:15:17.639 "name": "BaseBdev4", 00:15:17.639 "uuid": "431dd0bb-6c93-41d0-9434-806e22bbef98", 00:15:17.639 "is_configured": true, 00:15:17.639 "data_offset": 2048, 00:15:17.639 "data_size": 63488 00:15:17.639 } 00:15:17.639 ] 00:15:17.639 }' 00:15:17.639 12:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.639 12:14:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.207 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:18.207 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:18.207 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:18.207 12:14:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.207 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.207 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.207 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.207 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:18.207 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:18.207 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:18.207 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.207 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.207 [2024-11-25 12:14:14.154186] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:18.207 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.207 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:18.207 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:18.207 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.207 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.207 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.207 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:18.207 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:18.467 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:18.467 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:18.467 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:18.467 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.467 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.467 [2024-11-25 12:14:14.315433] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:18.467 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.467 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:18.467 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:18.467 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.467 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.467 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:18.467 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.467 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.467 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:18.467 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:18.467 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:18.467 12:14:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.467 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.467 [2024-11-25 12:14:14.460661] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:18.467 [2024-11-25 12:14:14.460873] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:18.467 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.467 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:18.467 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:18.467 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.467 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.467 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.467 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:18.725 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.725 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:18.725 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:18.725 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:18.725 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:18.725 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:18.725 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:15:18.725 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.725 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.725 BaseBdev2 00:15:18.725 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.725 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:18.725 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:18.725 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.726 [ 00:15:18.726 { 00:15:18.726 "name": "BaseBdev2", 00:15:18.726 "aliases": [ 00:15:18.726 
"d6b2966c-6bfd-4964-a0cd-6c663a40745a" 00:15:18.726 ], 00:15:18.726 "product_name": "Malloc disk", 00:15:18.726 "block_size": 512, 00:15:18.726 "num_blocks": 65536, 00:15:18.726 "uuid": "d6b2966c-6bfd-4964-a0cd-6c663a40745a", 00:15:18.726 "assigned_rate_limits": { 00:15:18.726 "rw_ios_per_sec": 0, 00:15:18.726 "rw_mbytes_per_sec": 0, 00:15:18.726 "r_mbytes_per_sec": 0, 00:15:18.726 "w_mbytes_per_sec": 0 00:15:18.726 }, 00:15:18.726 "claimed": false, 00:15:18.726 "zoned": false, 00:15:18.726 "supported_io_types": { 00:15:18.726 "read": true, 00:15:18.726 "write": true, 00:15:18.726 "unmap": true, 00:15:18.726 "flush": true, 00:15:18.726 "reset": true, 00:15:18.726 "nvme_admin": false, 00:15:18.726 "nvme_io": false, 00:15:18.726 "nvme_io_md": false, 00:15:18.726 "write_zeroes": true, 00:15:18.726 "zcopy": true, 00:15:18.726 "get_zone_info": false, 00:15:18.726 "zone_management": false, 00:15:18.726 "zone_append": false, 00:15:18.726 "compare": false, 00:15:18.726 "compare_and_write": false, 00:15:18.726 "abort": true, 00:15:18.726 "seek_hole": false, 00:15:18.726 "seek_data": false, 00:15:18.726 "copy": true, 00:15:18.726 "nvme_iov_md": false 00:15:18.726 }, 00:15:18.726 "memory_domains": [ 00:15:18.726 { 00:15:18.726 "dma_device_id": "system", 00:15:18.726 "dma_device_type": 1 00:15:18.726 }, 00:15:18.726 { 00:15:18.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.726 "dma_device_type": 2 00:15:18.726 } 00:15:18.726 ], 00:15:18.726 "driver_specific": {} 00:15:18.726 } 00:15:18.726 ] 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:18.726 12:14:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.726 BaseBdev3 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.726 [ 00:15:18.726 { 
00:15:18.726 "name": "BaseBdev3", 00:15:18.726 "aliases": [ 00:15:18.726 "82376ff5-9d45-4b4a-ba36-6c3fdc476448" 00:15:18.726 ], 00:15:18.726 "product_name": "Malloc disk", 00:15:18.726 "block_size": 512, 00:15:18.726 "num_blocks": 65536, 00:15:18.726 "uuid": "82376ff5-9d45-4b4a-ba36-6c3fdc476448", 00:15:18.726 "assigned_rate_limits": { 00:15:18.726 "rw_ios_per_sec": 0, 00:15:18.726 "rw_mbytes_per_sec": 0, 00:15:18.726 "r_mbytes_per_sec": 0, 00:15:18.726 "w_mbytes_per_sec": 0 00:15:18.726 }, 00:15:18.726 "claimed": false, 00:15:18.726 "zoned": false, 00:15:18.726 "supported_io_types": { 00:15:18.726 "read": true, 00:15:18.726 "write": true, 00:15:18.726 "unmap": true, 00:15:18.726 "flush": true, 00:15:18.726 "reset": true, 00:15:18.726 "nvme_admin": false, 00:15:18.726 "nvme_io": false, 00:15:18.726 "nvme_io_md": false, 00:15:18.726 "write_zeroes": true, 00:15:18.726 "zcopy": true, 00:15:18.726 "get_zone_info": false, 00:15:18.726 "zone_management": false, 00:15:18.726 "zone_append": false, 00:15:18.726 "compare": false, 00:15:18.726 "compare_and_write": false, 00:15:18.726 "abort": true, 00:15:18.726 "seek_hole": false, 00:15:18.726 "seek_data": false, 00:15:18.726 "copy": true, 00:15:18.726 "nvme_iov_md": false 00:15:18.726 }, 00:15:18.726 "memory_domains": [ 00:15:18.726 { 00:15:18.726 "dma_device_id": "system", 00:15:18.726 "dma_device_type": 1 00:15:18.726 }, 00:15:18.726 { 00:15:18.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.726 "dma_device_type": 2 00:15:18.726 } 00:15:18.726 ], 00:15:18.726 "driver_specific": {} 00:15:18.726 } 00:15:18.726 ] 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.726 BaseBdev4 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.726 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:15:18.726 [ 00:15:18.726 { 00:15:18.726 "name": "BaseBdev4", 00:15:18.726 "aliases": [ 00:15:18.726 "f76e8b41-7cfc-4b1b-94fc-8cb27dd9e76a" 00:15:18.726 ], 00:15:18.726 "product_name": "Malloc disk", 00:15:18.726 "block_size": 512, 00:15:18.726 "num_blocks": 65536, 00:15:18.726 "uuid": "f76e8b41-7cfc-4b1b-94fc-8cb27dd9e76a", 00:15:18.726 "assigned_rate_limits": { 00:15:18.726 "rw_ios_per_sec": 0, 00:15:18.726 "rw_mbytes_per_sec": 0, 00:15:18.726 "r_mbytes_per_sec": 0, 00:15:18.726 "w_mbytes_per_sec": 0 00:15:18.726 }, 00:15:18.726 "claimed": false, 00:15:18.726 "zoned": false, 00:15:18.726 "supported_io_types": { 00:15:18.726 "read": true, 00:15:18.726 "write": true, 00:15:18.726 "unmap": true, 00:15:18.726 "flush": true, 00:15:18.726 "reset": true, 00:15:18.726 "nvme_admin": false, 00:15:18.726 "nvme_io": false, 00:15:18.726 "nvme_io_md": false, 00:15:18.726 "write_zeroes": true, 00:15:18.726 "zcopy": true, 00:15:18.726 "get_zone_info": false, 00:15:18.726 "zone_management": false, 00:15:18.726 "zone_append": false, 00:15:18.726 "compare": false, 00:15:18.726 "compare_and_write": false, 00:15:18.726 "abort": true, 00:15:18.726 "seek_hole": false, 00:15:18.726 "seek_data": false, 00:15:18.726 "copy": true, 00:15:18.726 "nvme_iov_md": false 00:15:18.726 }, 00:15:18.726 "memory_domains": [ 00:15:18.726 { 00:15:18.726 "dma_device_id": "system", 00:15:18.726 "dma_device_type": 1 00:15:18.726 }, 00:15:18.726 { 00:15:18.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.985 "dma_device_type": 2 00:15:18.985 } 00:15:18.985 ], 00:15:18.985 "driver_specific": {} 00:15:18.985 } 00:15:18.985 ] 00:15:18.985 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.985 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:18.985 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:18.985 12:14:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:18.985 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:18.985 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.985 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.985 [2024-11-25 12:14:14.820199] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:18.985 [2024-11-25 12:14:14.820269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:18.985 [2024-11-25 12:14:14.820305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:18.985 [2024-11-25 12:14:14.823063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:18.985 [2024-11-25 12:14:14.823142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:18.985 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.985 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:18.985 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:18.985 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:18.985 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:18.985 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.985 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:15:18.985 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.985 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.985 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.985 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.985 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.985 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.985 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.985 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.985 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.985 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.985 "name": "Existed_Raid", 00:15:18.985 "uuid": "2ea86f4a-e382-482a-a264-ee3b2a5c42f0", 00:15:18.985 "strip_size_kb": 64, 00:15:18.985 "state": "configuring", 00:15:18.985 "raid_level": "raid0", 00:15:18.985 "superblock": true, 00:15:18.985 "num_base_bdevs": 4, 00:15:18.985 "num_base_bdevs_discovered": 3, 00:15:18.985 "num_base_bdevs_operational": 4, 00:15:18.985 "base_bdevs_list": [ 00:15:18.985 { 00:15:18.985 "name": "BaseBdev1", 00:15:18.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.985 "is_configured": false, 00:15:18.985 "data_offset": 0, 00:15:18.985 "data_size": 0 00:15:18.985 }, 00:15:18.985 { 00:15:18.985 "name": "BaseBdev2", 00:15:18.985 "uuid": "d6b2966c-6bfd-4964-a0cd-6c663a40745a", 00:15:18.985 "is_configured": true, 00:15:18.985 "data_offset": 2048, 00:15:18.985 "data_size": 63488 
00:15:18.985 }, 00:15:18.985 { 00:15:18.985 "name": "BaseBdev3", 00:15:18.985 "uuid": "82376ff5-9d45-4b4a-ba36-6c3fdc476448", 00:15:18.985 "is_configured": true, 00:15:18.985 "data_offset": 2048, 00:15:18.985 "data_size": 63488 00:15:18.985 }, 00:15:18.985 { 00:15:18.986 "name": "BaseBdev4", 00:15:18.986 "uuid": "f76e8b41-7cfc-4b1b-94fc-8cb27dd9e76a", 00:15:18.986 "is_configured": true, 00:15:18.986 "data_offset": 2048, 00:15:18.986 "data_size": 63488 00:15:18.986 } 00:15:18.986 ] 00:15:18.986 }' 00:15:18.986 12:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.986 12:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.244 12:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:19.244 12:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.244 12:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.502 [2024-11-25 12:14:15.332327] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:19.502 12:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.502 12:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:19.502 12:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:19.502 12:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:19.502 12:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:19.502 12:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.502 12:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:15:19.502 12:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.502 12:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.502 12:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.502 12:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.502 12:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.502 12:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.502 12:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.502 12:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.502 12:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.502 12:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.502 "name": "Existed_Raid", 00:15:19.502 "uuid": "2ea86f4a-e382-482a-a264-ee3b2a5c42f0", 00:15:19.502 "strip_size_kb": 64, 00:15:19.502 "state": "configuring", 00:15:19.502 "raid_level": "raid0", 00:15:19.502 "superblock": true, 00:15:19.502 "num_base_bdevs": 4, 00:15:19.502 "num_base_bdevs_discovered": 2, 00:15:19.502 "num_base_bdevs_operational": 4, 00:15:19.502 "base_bdevs_list": [ 00:15:19.502 { 00:15:19.502 "name": "BaseBdev1", 00:15:19.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.502 "is_configured": false, 00:15:19.502 "data_offset": 0, 00:15:19.502 "data_size": 0 00:15:19.502 }, 00:15:19.502 { 00:15:19.502 "name": null, 00:15:19.502 "uuid": "d6b2966c-6bfd-4964-a0cd-6c663a40745a", 00:15:19.502 "is_configured": false, 00:15:19.502 "data_offset": 0, 00:15:19.502 "data_size": 63488 
00:15:19.502 }, 00:15:19.502 { 00:15:19.502 "name": "BaseBdev3", 00:15:19.502 "uuid": "82376ff5-9d45-4b4a-ba36-6c3fdc476448", 00:15:19.502 "is_configured": true, 00:15:19.502 "data_offset": 2048, 00:15:19.502 "data_size": 63488 00:15:19.502 }, 00:15:19.502 { 00:15:19.502 "name": "BaseBdev4", 00:15:19.502 "uuid": "f76e8b41-7cfc-4b1b-94fc-8cb27dd9e76a", 00:15:19.503 "is_configured": true, 00:15:19.503 "data_offset": 2048, 00:15:19.503 "data_size": 63488 00:15:19.503 } 00:15:19.503 ] 00:15:19.503 }' 00:15:19.503 12:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.503 12:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.761 12:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.761 12:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:19.761 12:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.761 12:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.761 12:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.019 12:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:20.019 12:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:20.019 12:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.019 12:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.019 [2024-11-25 12:14:15.909950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:20.019 BaseBdev1 00:15:20.019 12:14:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.019 12:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:20.019 12:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:20.019 12:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:20.019 12:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:20.019 12:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:20.019 12:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:20.019 12:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:20.019 12:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.019 12:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.019 12:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.019 12:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:20.019 12:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.019 12:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.019 [ 00:15:20.019 { 00:15:20.019 "name": "BaseBdev1", 00:15:20.019 "aliases": [ 00:15:20.019 "7b0416e5-f110-4cde-9193-c82e7f3e1274" 00:15:20.019 ], 00:15:20.019 "product_name": "Malloc disk", 00:15:20.019 "block_size": 512, 00:15:20.019 "num_blocks": 65536, 00:15:20.019 "uuid": "7b0416e5-f110-4cde-9193-c82e7f3e1274", 00:15:20.019 "assigned_rate_limits": { 00:15:20.019 "rw_ios_per_sec": 0, 00:15:20.019 "rw_mbytes_per_sec": 0, 
00:15:20.019 "r_mbytes_per_sec": 0, 00:15:20.019 "w_mbytes_per_sec": 0 00:15:20.019 }, 00:15:20.019 "claimed": true, 00:15:20.019 "claim_type": "exclusive_write", 00:15:20.019 "zoned": false, 00:15:20.019 "supported_io_types": { 00:15:20.019 "read": true, 00:15:20.019 "write": true, 00:15:20.019 "unmap": true, 00:15:20.019 "flush": true, 00:15:20.019 "reset": true, 00:15:20.019 "nvme_admin": false, 00:15:20.019 "nvme_io": false, 00:15:20.019 "nvme_io_md": false, 00:15:20.019 "write_zeroes": true, 00:15:20.019 "zcopy": true, 00:15:20.019 "get_zone_info": false, 00:15:20.019 "zone_management": false, 00:15:20.019 "zone_append": false, 00:15:20.019 "compare": false, 00:15:20.019 "compare_and_write": false, 00:15:20.019 "abort": true, 00:15:20.019 "seek_hole": false, 00:15:20.019 "seek_data": false, 00:15:20.019 "copy": true, 00:15:20.019 "nvme_iov_md": false 00:15:20.019 }, 00:15:20.019 "memory_domains": [ 00:15:20.019 { 00:15:20.019 "dma_device_id": "system", 00:15:20.019 "dma_device_type": 1 00:15:20.019 }, 00:15:20.019 { 00:15:20.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.019 "dma_device_type": 2 00:15:20.019 } 00:15:20.019 ], 00:15:20.019 "driver_specific": {} 00:15:20.019 } 00:15:20.019 ] 00:15:20.019 12:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.019 12:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:20.019 12:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:20.019 12:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:20.019 12:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:20.019 12:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:20.019 12:14:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.019 12:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:20.019 12:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.019 12:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.019 12:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.019 12:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.019 12:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.019 12:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.019 12:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.019 12:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.019 12:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.019 12:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.019 "name": "Existed_Raid", 00:15:20.019 "uuid": "2ea86f4a-e382-482a-a264-ee3b2a5c42f0", 00:15:20.019 "strip_size_kb": 64, 00:15:20.020 "state": "configuring", 00:15:20.020 "raid_level": "raid0", 00:15:20.020 "superblock": true, 00:15:20.020 "num_base_bdevs": 4, 00:15:20.020 "num_base_bdevs_discovered": 3, 00:15:20.020 "num_base_bdevs_operational": 4, 00:15:20.020 "base_bdevs_list": [ 00:15:20.020 { 00:15:20.020 "name": "BaseBdev1", 00:15:20.020 "uuid": "7b0416e5-f110-4cde-9193-c82e7f3e1274", 00:15:20.020 "is_configured": true, 00:15:20.020 "data_offset": 2048, 00:15:20.020 "data_size": 63488 00:15:20.020 }, 00:15:20.020 { 
00:15:20.020 "name": null, 00:15:20.020 "uuid": "d6b2966c-6bfd-4964-a0cd-6c663a40745a", 00:15:20.020 "is_configured": false, 00:15:20.020 "data_offset": 0, 00:15:20.020 "data_size": 63488 00:15:20.020 }, 00:15:20.020 { 00:15:20.020 "name": "BaseBdev3", 00:15:20.020 "uuid": "82376ff5-9d45-4b4a-ba36-6c3fdc476448", 00:15:20.020 "is_configured": true, 00:15:20.020 "data_offset": 2048, 00:15:20.020 "data_size": 63488 00:15:20.020 }, 00:15:20.020 { 00:15:20.020 "name": "BaseBdev4", 00:15:20.020 "uuid": "f76e8b41-7cfc-4b1b-94fc-8cb27dd9e76a", 00:15:20.020 "is_configured": true, 00:15:20.020 "data_offset": 2048, 00:15:20.020 "data_size": 63488 00:15:20.020 } 00:15:20.020 ] 00:15:20.020 }' 00:15:20.020 12:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.020 12:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.633 12:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.633 12:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:20.633 12:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.633 12:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.633 12:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.633 12:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:20.633 12:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:20.633 12:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.633 12:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.633 [2024-11-25 12:14:16.482207] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:20.633 12:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.633 12:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:20.633 12:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:20.633 12:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:20.633 12:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:20.633 12:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.633 12:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:20.633 12:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.633 12:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.633 12:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.633 12:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.633 12:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.633 12:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.633 12:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.633 12:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.633 12:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.633 12:14:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.633 "name": "Existed_Raid", 00:15:20.633 "uuid": "2ea86f4a-e382-482a-a264-ee3b2a5c42f0", 00:15:20.633 "strip_size_kb": 64, 00:15:20.633 "state": "configuring", 00:15:20.633 "raid_level": "raid0", 00:15:20.633 "superblock": true, 00:15:20.633 "num_base_bdevs": 4, 00:15:20.633 "num_base_bdevs_discovered": 2, 00:15:20.633 "num_base_bdevs_operational": 4, 00:15:20.633 "base_bdevs_list": [ 00:15:20.633 { 00:15:20.633 "name": "BaseBdev1", 00:15:20.633 "uuid": "7b0416e5-f110-4cde-9193-c82e7f3e1274", 00:15:20.633 "is_configured": true, 00:15:20.633 "data_offset": 2048, 00:15:20.633 "data_size": 63488 00:15:20.633 }, 00:15:20.633 { 00:15:20.633 "name": null, 00:15:20.633 "uuid": "d6b2966c-6bfd-4964-a0cd-6c663a40745a", 00:15:20.633 "is_configured": false, 00:15:20.633 "data_offset": 0, 00:15:20.633 "data_size": 63488 00:15:20.633 }, 00:15:20.633 { 00:15:20.633 "name": null, 00:15:20.633 "uuid": "82376ff5-9d45-4b4a-ba36-6c3fdc476448", 00:15:20.633 "is_configured": false, 00:15:20.633 "data_offset": 0, 00:15:20.633 "data_size": 63488 00:15:20.633 }, 00:15:20.633 { 00:15:20.633 "name": "BaseBdev4", 00:15:20.633 "uuid": "f76e8b41-7cfc-4b1b-94fc-8cb27dd9e76a", 00:15:20.633 "is_configured": true, 00:15:20.633 "data_offset": 2048, 00:15:20.633 "data_size": 63488 00:15:20.633 } 00:15:20.633 ] 00:15:20.633 }' 00:15:20.633 12:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.633 12:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.892 12:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.892 12:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:20.892 12:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.892 
12:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.892 12:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.150 12:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:21.150 12:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:21.150 12:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.150 12:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.150 [2024-11-25 12:14:17.002450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:21.150 12:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.150 12:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:21.150 12:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:21.150 12:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:21.150 12:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:21.150 12:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.150 12:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:21.150 12:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.150 12:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.150 12:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:21.150 12:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.150 12:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.150 12:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.150 12:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.150 12:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.150 12:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.150 12:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.150 "name": "Existed_Raid", 00:15:21.150 "uuid": "2ea86f4a-e382-482a-a264-ee3b2a5c42f0", 00:15:21.150 "strip_size_kb": 64, 00:15:21.150 "state": "configuring", 00:15:21.150 "raid_level": "raid0", 00:15:21.150 "superblock": true, 00:15:21.150 "num_base_bdevs": 4, 00:15:21.150 "num_base_bdevs_discovered": 3, 00:15:21.150 "num_base_bdevs_operational": 4, 00:15:21.150 "base_bdevs_list": [ 00:15:21.150 { 00:15:21.150 "name": "BaseBdev1", 00:15:21.150 "uuid": "7b0416e5-f110-4cde-9193-c82e7f3e1274", 00:15:21.150 "is_configured": true, 00:15:21.150 "data_offset": 2048, 00:15:21.150 "data_size": 63488 00:15:21.150 }, 00:15:21.150 { 00:15:21.150 "name": null, 00:15:21.150 "uuid": "d6b2966c-6bfd-4964-a0cd-6c663a40745a", 00:15:21.150 "is_configured": false, 00:15:21.150 "data_offset": 0, 00:15:21.150 "data_size": 63488 00:15:21.150 }, 00:15:21.150 { 00:15:21.150 "name": "BaseBdev3", 00:15:21.150 "uuid": "82376ff5-9d45-4b4a-ba36-6c3fdc476448", 00:15:21.150 "is_configured": true, 00:15:21.150 "data_offset": 2048, 00:15:21.150 "data_size": 63488 00:15:21.150 }, 00:15:21.150 { 00:15:21.150 "name": "BaseBdev4", 00:15:21.150 "uuid": 
"f76e8b41-7cfc-4b1b-94fc-8cb27dd9e76a", 00:15:21.150 "is_configured": true, 00:15:21.150 "data_offset": 2048, 00:15:21.150 "data_size": 63488 00:15:21.150 } 00:15:21.150 ] 00:15:21.150 }' 00:15:21.150 12:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.150 12:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.717 12:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.717 12:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.717 12:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:21.717 12:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.717 12:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.717 12:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:21.717 12:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:21.717 12:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.717 12:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.717 [2024-11-25 12:14:17.562666] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:21.717 12:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.717 12:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:21.717 12:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:21.717 12:14:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:21.717 12:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:21.717 12:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.717 12:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:21.717 12:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.717 12:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.717 12:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.717 12:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.717 12:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.717 12:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.717 12:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.717 12:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.717 12:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.717 12:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.717 "name": "Existed_Raid", 00:15:21.717 "uuid": "2ea86f4a-e382-482a-a264-ee3b2a5c42f0", 00:15:21.717 "strip_size_kb": 64, 00:15:21.717 "state": "configuring", 00:15:21.717 "raid_level": "raid0", 00:15:21.717 "superblock": true, 00:15:21.717 "num_base_bdevs": 4, 00:15:21.717 "num_base_bdevs_discovered": 2, 00:15:21.717 "num_base_bdevs_operational": 4, 00:15:21.717 "base_bdevs_list": [ 00:15:21.717 { 00:15:21.717 "name": null, 00:15:21.717 
"uuid": "7b0416e5-f110-4cde-9193-c82e7f3e1274", 00:15:21.717 "is_configured": false, 00:15:21.717 "data_offset": 0, 00:15:21.717 "data_size": 63488 00:15:21.717 }, 00:15:21.717 { 00:15:21.717 "name": null, 00:15:21.717 "uuid": "d6b2966c-6bfd-4964-a0cd-6c663a40745a", 00:15:21.717 "is_configured": false, 00:15:21.717 "data_offset": 0, 00:15:21.717 "data_size": 63488 00:15:21.717 }, 00:15:21.717 { 00:15:21.717 "name": "BaseBdev3", 00:15:21.718 "uuid": "82376ff5-9d45-4b4a-ba36-6c3fdc476448", 00:15:21.718 "is_configured": true, 00:15:21.718 "data_offset": 2048, 00:15:21.718 "data_size": 63488 00:15:21.718 }, 00:15:21.718 { 00:15:21.718 "name": "BaseBdev4", 00:15:21.718 "uuid": "f76e8b41-7cfc-4b1b-94fc-8cb27dd9e76a", 00:15:21.718 "is_configured": true, 00:15:21.718 "data_offset": 2048, 00:15:21.718 "data_size": 63488 00:15:21.718 } 00:15:21.718 ] 00:15:21.718 }' 00:15:21.718 12:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.718 12:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.283 12:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:22.283 12:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.283 12:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.283 12:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.283 12:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.283 12:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:22.283 12:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:22.283 12:14:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.283 12:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.283 [2024-11-25 12:14:18.212153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:22.283 12:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.283 12:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:22.283 12:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:22.283 12:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:22.283 12:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:22.283 12:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:22.283 12:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:22.283 12:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.283 12:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.283 12:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.283 12:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.283 12:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.283 12:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.283 12:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.283 12:14:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.283 12:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.283 12:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.283 "name": "Existed_Raid", 00:15:22.283 "uuid": "2ea86f4a-e382-482a-a264-ee3b2a5c42f0", 00:15:22.283 "strip_size_kb": 64, 00:15:22.283 "state": "configuring", 00:15:22.283 "raid_level": "raid0", 00:15:22.283 "superblock": true, 00:15:22.283 "num_base_bdevs": 4, 00:15:22.283 "num_base_bdevs_discovered": 3, 00:15:22.283 "num_base_bdevs_operational": 4, 00:15:22.283 "base_bdevs_list": [ 00:15:22.283 { 00:15:22.283 "name": null, 00:15:22.283 "uuid": "7b0416e5-f110-4cde-9193-c82e7f3e1274", 00:15:22.283 "is_configured": false, 00:15:22.283 "data_offset": 0, 00:15:22.283 "data_size": 63488 00:15:22.284 }, 00:15:22.284 { 00:15:22.284 "name": "BaseBdev2", 00:15:22.284 "uuid": "d6b2966c-6bfd-4964-a0cd-6c663a40745a", 00:15:22.284 "is_configured": true, 00:15:22.284 "data_offset": 2048, 00:15:22.284 "data_size": 63488 00:15:22.284 }, 00:15:22.284 { 00:15:22.284 "name": "BaseBdev3", 00:15:22.284 "uuid": "82376ff5-9d45-4b4a-ba36-6c3fdc476448", 00:15:22.284 "is_configured": true, 00:15:22.284 "data_offset": 2048, 00:15:22.284 "data_size": 63488 00:15:22.284 }, 00:15:22.284 { 00:15:22.284 "name": "BaseBdev4", 00:15:22.284 "uuid": "f76e8b41-7cfc-4b1b-94fc-8cb27dd9e76a", 00:15:22.284 "is_configured": true, 00:15:22.284 "data_offset": 2048, 00:15:22.284 "data_size": 63488 00:15:22.284 } 00:15:22.284 ] 00:15:22.284 }' 00:15:22.284 12:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.284 12:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.850 12:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.850 12:14:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:22.850 12:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.850 12:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.850 12:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.850 12:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:22.850 12:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.850 12:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.850 12:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.850 12:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:22.850 12:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.850 12:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7b0416e5-f110-4cde-9193-c82e7f3e1274 00:15:22.850 12:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.850 12:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.850 [2024-11-25 12:14:18.837797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:22.850 [2024-11-25 12:14:18.838090] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:22.850 [2024-11-25 12:14:18.838109] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:22.850 NewBaseBdev 00:15:22.850 [2024-11-25 12:14:18.838453] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:22.850 [2024-11-25 12:14:18.838634] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:22.850 [2024-11-25 12:14:18.838657] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:22.850 [2024-11-25 12:14:18.838812] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:22.850 12:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.850 12:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:22.850 12:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:22.850 12:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:22.850 12:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:22.850 12:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:22.850 12:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:22.850 12:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:22.850 12:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.850 12:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.850 12:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.850 12:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:22.850 12:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.850 
12:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.850 [ 00:15:22.850 { 00:15:22.850 "name": "NewBaseBdev", 00:15:22.850 "aliases": [ 00:15:22.851 "7b0416e5-f110-4cde-9193-c82e7f3e1274" 00:15:22.851 ], 00:15:22.851 "product_name": "Malloc disk", 00:15:22.851 "block_size": 512, 00:15:22.851 "num_blocks": 65536, 00:15:22.851 "uuid": "7b0416e5-f110-4cde-9193-c82e7f3e1274", 00:15:22.851 "assigned_rate_limits": { 00:15:22.851 "rw_ios_per_sec": 0, 00:15:22.851 "rw_mbytes_per_sec": 0, 00:15:22.851 "r_mbytes_per_sec": 0, 00:15:22.851 "w_mbytes_per_sec": 0 00:15:22.851 }, 00:15:22.851 "claimed": true, 00:15:22.851 "claim_type": "exclusive_write", 00:15:22.851 "zoned": false, 00:15:22.851 "supported_io_types": { 00:15:22.851 "read": true, 00:15:22.851 "write": true, 00:15:22.851 "unmap": true, 00:15:22.851 "flush": true, 00:15:22.851 "reset": true, 00:15:22.851 "nvme_admin": false, 00:15:22.851 "nvme_io": false, 00:15:22.851 "nvme_io_md": false, 00:15:22.851 "write_zeroes": true, 00:15:22.851 "zcopy": true, 00:15:22.851 "get_zone_info": false, 00:15:22.851 "zone_management": false, 00:15:22.851 "zone_append": false, 00:15:22.851 "compare": false, 00:15:22.851 "compare_and_write": false, 00:15:22.851 "abort": true, 00:15:22.851 "seek_hole": false, 00:15:22.851 "seek_data": false, 00:15:22.851 "copy": true, 00:15:22.851 "nvme_iov_md": false 00:15:22.851 }, 00:15:22.851 "memory_domains": [ 00:15:22.851 { 00:15:22.851 "dma_device_id": "system", 00:15:22.851 "dma_device_type": 1 00:15:22.851 }, 00:15:22.851 { 00:15:22.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.851 "dma_device_type": 2 00:15:22.851 } 00:15:22.851 ], 00:15:22.851 "driver_specific": {} 00:15:22.851 } 00:15:22.851 ] 00:15:22.851 12:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.851 12:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:22.851 12:14:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:15:22.851 12:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:22.851 12:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.851 12:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:22.851 12:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:22.851 12:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:22.851 12:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.851 12:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.851 12:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.851 12:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.851 12:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.851 12:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.851 12:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.851 12:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.851 12:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.851 12:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.851 "name": "Existed_Raid", 00:15:22.851 "uuid": "2ea86f4a-e382-482a-a264-ee3b2a5c42f0", 00:15:22.851 "strip_size_kb": 64, 00:15:22.851 
"state": "online", 00:15:22.851 "raid_level": "raid0", 00:15:22.851 "superblock": true, 00:15:22.851 "num_base_bdevs": 4, 00:15:22.851 "num_base_bdevs_discovered": 4, 00:15:22.851 "num_base_bdevs_operational": 4, 00:15:22.851 "base_bdevs_list": [ 00:15:22.851 { 00:15:22.851 "name": "NewBaseBdev", 00:15:22.851 "uuid": "7b0416e5-f110-4cde-9193-c82e7f3e1274", 00:15:22.851 "is_configured": true, 00:15:22.851 "data_offset": 2048, 00:15:22.851 "data_size": 63488 00:15:22.851 }, 00:15:22.851 { 00:15:22.851 "name": "BaseBdev2", 00:15:22.851 "uuid": "d6b2966c-6bfd-4964-a0cd-6c663a40745a", 00:15:22.851 "is_configured": true, 00:15:22.851 "data_offset": 2048, 00:15:22.851 "data_size": 63488 00:15:22.851 }, 00:15:22.851 { 00:15:22.851 "name": "BaseBdev3", 00:15:22.851 "uuid": "82376ff5-9d45-4b4a-ba36-6c3fdc476448", 00:15:22.851 "is_configured": true, 00:15:22.851 "data_offset": 2048, 00:15:22.851 "data_size": 63488 00:15:22.851 }, 00:15:22.851 { 00:15:22.851 "name": "BaseBdev4", 00:15:22.851 "uuid": "f76e8b41-7cfc-4b1b-94fc-8cb27dd9e76a", 00:15:22.851 "is_configured": true, 00:15:22.851 "data_offset": 2048, 00:15:22.851 "data_size": 63488 00:15:22.851 } 00:15:22.851 ] 00:15:22.851 }' 00:15:22.851 12:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.851 12:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.418 12:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:23.418 12:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:23.418 12:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:23.418 12:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:23.418 12:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:23.418 
12:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:23.418 12:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:23.418 12:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:23.418 12:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.418 12:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.418 [2024-11-25 12:14:19.418470] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:23.418 12:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.418 12:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:23.418 "name": "Existed_Raid", 00:15:23.418 "aliases": [ 00:15:23.418 "2ea86f4a-e382-482a-a264-ee3b2a5c42f0" 00:15:23.418 ], 00:15:23.418 "product_name": "Raid Volume", 00:15:23.418 "block_size": 512, 00:15:23.418 "num_blocks": 253952, 00:15:23.418 "uuid": "2ea86f4a-e382-482a-a264-ee3b2a5c42f0", 00:15:23.418 "assigned_rate_limits": { 00:15:23.418 "rw_ios_per_sec": 0, 00:15:23.418 "rw_mbytes_per_sec": 0, 00:15:23.418 "r_mbytes_per_sec": 0, 00:15:23.418 "w_mbytes_per_sec": 0 00:15:23.418 }, 00:15:23.418 "claimed": false, 00:15:23.418 "zoned": false, 00:15:23.418 "supported_io_types": { 00:15:23.418 "read": true, 00:15:23.418 "write": true, 00:15:23.418 "unmap": true, 00:15:23.418 "flush": true, 00:15:23.418 "reset": true, 00:15:23.418 "nvme_admin": false, 00:15:23.418 "nvme_io": false, 00:15:23.418 "nvme_io_md": false, 00:15:23.418 "write_zeroes": true, 00:15:23.418 "zcopy": false, 00:15:23.418 "get_zone_info": false, 00:15:23.418 "zone_management": false, 00:15:23.418 "zone_append": false, 00:15:23.418 "compare": false, 00:15:23.418 "compare_and_write": false, 00:15:23.418 "abort": 
false, 00:15:23.418 "seek_hole": false, 00:15:23.418 "seek_data": false, 00:15:23.418 "copy": false, 00:15:23.418 "nvme_iov_md": false 00:15:23.418 }, 00:15:23.418 "memory_domains": [ 00:15:23.418 { 00:15:23.418 "dma_device_id": "system", 00:15:23.418 "dma_device_type": 1 00:15:23.418 }, 00:15:23.418 { 00:15:23.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.418 "dma_device_type": 2 00:15:23.418 }, 00:15:23.418 { 00:15:23.418 "dma_device_id": "system", 00:15:23.418 "dma_device_type": 1 00:15:23.418 }, 00:15:23.418 { 00:15:23.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.418 "dma_device_type": 2 00:15:23.418 }, 00:15:23.418 { 00:15:23.418 "dma_device_id": "system", 00:15:23.418 "dma_device_type": 1 00:15:23.418 }, 00:15:23.418 { 00:15:23.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.418 "dma_device_type": 2 00:15:23.418 }, 00:15:23.418 { 00:15:23.418 "dma_device_id": "system", 00:15:23.418 "dma_device_type": 1 00:15:23.418 }, 00:15:23.418 { 00:15:23.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.418 "dma_device_type": 2 00:15:23.418 } 00:15:23.418 ], 00:15:23.418 "driver_specific": { 00:15:23.418 "raid": { 00:15:23.418 "uuid": "2ea86f4a-e382-482a-a264-ee3b2a5c42f0", 00:15:23.418 "strip_size_kb": 64, 00:15:23.418 "state": "online", 00:15:23.418 "raid_level": "raid0", 00:15:23.418 "superblock": true, 00:15:23.418 "num_base_bdevs": 4, 00:15:23.418 "num_base_bdevs_discovered": 4, 00:15:23.418 "num_base_bdevs_operational": 4, 00:15:23.418 "base_bdevs_list": [ 00:15:23.418 { 00:15:23.418 "name": "NewBaseBdev", 00:15:23.418 "uuid": "7b0416e5-f110-4cde-9193-c82e7f3e1274", 00:15:23.418 "is_configured": true, 00:15:23.418 "data_offset": 2048, 00:15:23.418 "data_size": 63488 00:15:23.418 }, 00:15:23.418 { 00:15:23.418 "name": "BaseBdev2", 00:15:23.418 "uuid": "d6b2966c-6bfd-4964-a0cd-6c663a40745a", 00:15:23.418 "is_configured": true, 00:15:23.418 "data_offset": 2048, 00:15:23.418 "data_size": 63488 00:15:23.418 }, 00:15:23.418 { 00:15:23.418 
"name": "BaseBdev3", 00:15:23.418 "uuid": "82376ff5-9d45-4b4a-ba36-6c3fdc476448", 00:15:23.418 "is_configured": true, 00:15:23.418 "data_offset": 2048, 00:15:23.418 "data_size": 63488 00:15:23.418 }, 00:15:23.418 { 00:15:23.418 "name": "BaseBdev4", 00:15:23.418 "uuid": "f76e8b41-7cfc-4b1b-94fc-8cb27dd9e76a", 00:15:23.418 "is_configured": true, 00:15:23.418 "data_offset": 2048, 00:15:23.418 "data_size": 63488 00:15:23.418 } 00:15:23.418 ] 00:15:23.418 } 00:15:23.418 } 00:15:23.418 }' 00:15:23.418 12:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:23.677 12:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:23.677 BaseBdev2 00:15:23.677 BaseBdev3 00:15:23.677 BaseBdev4' 00:15:23.677 12:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.677 12:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:23.677 12:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:23.677 12:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.677 12:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:23.677 12:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.677 12:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.677 12:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.677 12:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:23.677 12:14:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:23.677 12:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:23.677 12:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.677 12:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:23.677 12:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.677 12:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.678 12:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.678 12:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:23.678 12:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:23.678 12:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:23.678 12:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:23.678 12:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.678 12:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.678 12:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.678 12:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.678 12:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:23.678 12:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:15:23.678 12:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:23.678 12:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:23.678 12:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.678 12:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.678 12:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.678 12:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.678 12:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:23.678 12:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:23.678 12:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:23.678 12:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.678 12:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.678 [2024-11-25 12:14:19.758075] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:23.678 [2024-11-25 12:14:19.758114] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:23.678 [2024-11-25 12:14:19.758213] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:23.678 [2024-11-25 12:14:19.758305] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:23.678 [2024-11-25 12:14:19.758321] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:15:23.678 12:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.678 12:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70154 00:15:23.678 12:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70154 ']' 00:15:23.678 12:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70154 00:15:23.678 12:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:23.936 12:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:23.936 12:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70154 00:15:23.936 killing process with pid 70154 00:15:23.936 12:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:23.936 12:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:23.936 12:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70154' 00:15:23.936 12:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70154 00:15:23.936 [2024-11-25 12:14:19.794401] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:23.936 12:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70154 00:15:24.195 [2024-11-25 12:14:20.146686] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:25.203 12:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:25.203 00:15:25.203 real 0m12.365s 00:15:25.203 user 0m20.474s 00:15:25.203 sys 0m1.654s 00:15:25.203 12:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:25.203 
************************************ 00:15:25.203 END TEST raid_state_function_test_sb 00:15:25.203 ************************************ 00:15:25.203 12:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.203 12:14:21 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:15:25.203 12:14:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:25.203 12:14:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:25.203 12:14:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:25.203 ************************************ 00:15:25.203 START TEST raid_superblock_test 00:15:25.203 ************************************ 00:15:25.203 12:14:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:15:25.203 12:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:15:25.203 12:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:15:25.203 12:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:25.203 12:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:25.203 12:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:25.203 12:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:25.203 12:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:25.203 12:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:25.203 12:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:25.203 12:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:25.203 12:14:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:25.203 12:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:25.204 12:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:25.204 12:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:15:25.204 12:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:25.204 12:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:25.204 12:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70837 00:15:25.204 12:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:25.204 12:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70837 00:15:25.204 12:14:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70837 ']' 00:15:25.204 12:14:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.204 12:14:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:25.204 12:14:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.204 12:14:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:25.204 12:14:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.462 [2024-11-25 12:14:21.318413] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:15:25.462 [2024-11-25 12:14:21.318775] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70837 ] 00:15:25.462 [2024-11-25 12:14:21.489477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.720 [2024-11-25 12:14:21.620366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.979 [2024-11-25 12:14:21.822481] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:25.979 [2024-11-25 12:14:21.822737] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:26.237 12:14:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:26.237 12:14:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:26.237 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:26.237 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:26.237 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:26.237 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:26.237 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:26.237 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:26.237 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:26.237 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:26.237 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:15:26.237 
12:14:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.237 12:14:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.237 malloc1 00:15:26.237 12:14:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.237 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:26.237 12:14:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.237 12:14:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.238 [2024-11-25 12:14:22.318734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:26.238 [2024-11-25 12:14:22.318944] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.238 [2024-11-25 12:14:22.319024] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:26.238 [2024-11-25 12:14:22.319150] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.238 [2024-11-25 12:14:22.321946] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.238 [2024-11-25 12:14:22.322118] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:26.238 pt1 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.498 malloc2 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.498 [2024-11-25 12:14:22.374649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:26.498 [2024-11-25 12:14:22.374849] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.498 [2024-11-25 12:14:22.374926] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:26.498 [2024-11-25 12:14:22.375049] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.498 [2024-11-25 12:14:22.377947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.498 [2024-11-25 12:14:22.378114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:26.498 
pt2 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.498 malloc3 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.498 [2024-11-25 12:14:22.443625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:26.498 [2024-11-25 12:14:22.443811] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.498 [2024-11-25 12:14:22.443895] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:26.498 [2024-11-25 12:14:22.444005] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.498 [2024-11-25 12:14:22.446861] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.498 [2024-11-25 12:14:22.447017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:26.498 pt3 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.498 malloc4 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.498 [2024-11-25 12:14:22.499437] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:26.498 [2024-11-25 12:14:22.499625] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.498 [2024-11-25 12:14:22.499703] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:26.498 [2024-11-25 12:14:22.499812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.498 [2024-11-25 12:14:22.502816] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.498 [2024-11-25 12:14:22.502973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:26.498 pt4 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.498 [2024-11-25 12:14:22.511446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:26.498 [2024-11-25 
12:14:22.513836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:26.498 [2024-11-25 12:14:22.513935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:26.498 [2024-11-25 12:14:22.514046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:26.498 [2024-11-25 12:14:22.514303] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:26.498 [2024-11-25 12:14:22.514322] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:26.498 [2024-11-25 12:14:22.514661] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:26.498 [2024-11-25 12:14:22.514885] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:26.498 [2024-11-25 12:14:22.514906] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:26.498 [2024-11-25 12:14:22.515083] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.498 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:26.499 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:26.499 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.499 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:26.499 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.499 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:26.499 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:15:26.499 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.499 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.499 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.499 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.499 12:14:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.499 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.499 12:14:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.499 12:14:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.758 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.758 "name": "raid_bdev1", 00:15:26.758 "uuid": "583317cd-c352-4ea7-85a0-31a8b6a2bd33", 00:15:26.758 "strip_size_kb": 64, 00:15:26.758 "state": "online", 00:15:26.758 "raid_level": "raid0", 00:15:26.758 "superblock": true, 00:15:26.758 "num_base_bdevs": 4, 00:15:26.758 "num_base_bdevs_discovered": 4, 00:15:26.758 "num_base_bdevs_operational": 4, 00:15:26.758 "base_bdevs_list": [ 00:15:26.758 { 00:15:26.758 "name": "pt1", 00:15:26.758 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:26.758 "is_configured": true, 00:15:26.758 "data_offset": 2048, 00:15:26.758 "data_size": 63488 00:15:26.758 }, 00:15:26.758 { 00:15:26.758 "name": "pt2", 00:15:26.758 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:26.758 "is_configured": true, 00:15:26.758 "data_offset": 2048, 00:15:26.758 "data_size": 63488 00:15:26.758 }, 00:15:26.758 { 00:15:26.758 "name": "pt3", 00:15:26.758 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:26.758 "is_configured": true, 00:15:26.758 "data_offset": 2048, 00:15:26.758 
"data_size": 63488 00:15:26.758 }, 00:15:26.758 { 00:15:26.758 "name": "pt4", 00:15:26.758 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:26.758 "is_configured": true, 00:15:26.758 "data_offset": 2048, 00:15:26.758 "data_size": 63488 00:15:26.758 } 00:15:26.758 ] 00:15:26.758 }' 00:15:26.758 12:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.758 12:14:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.016 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:27.016 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:27.016 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:27.016 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:27.016 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:27.016 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:27.016 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:27.016 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:27.016 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.016 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.016 [2024-11-25 12:14:23.048019] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:27.016 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.016 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:27.016 "name": "raid_bdev1", 00:15:27.016 "aliases": [ 00:15:27.016 "583317cd-c352-4ea7-85a0-31a8b6a2bd33" 
00:15:27.016 ], 00:15:27.016 "product_name": "Raid Volume", 00:15:27.017 "block_size": 512, 00:15:27.017 "num_blocks": 253952, 00:15:27.017 "uuid": "583317cd-c352-4ea7-85a0-31a8b6a2bd33", 00:15:27.017 "assigned_rate_limits": { 00:15:27.017 "rw_ios_per_sec": 0, 00:15:27.017 "rw_mbytes_per_sec": 0, 00:15:27.017 "r_mbytes_per_sec": 0, 00:15:27.017 "w_mbytes_per_sec": 0 00:15:27.017 }, 00:15:27.017 "claimed": false, 00:15:27.017 "zoned": false, 00:15:27.017 "supported_io_types": { 00:15:27.017 "read": true, 00:15:27.017 "write": true, 00:15:27.017 "unmap": true, 00:15:27.017 "flush": true, 00:15:27.017 "reset": true, 00:15:27.017 "nvme_admin": false, 00:15:27.017 "nvme_io": false, 00:15:27.017 "nvme_io_md": false, 00:15:27.017 "write_zeroes": true, 00:15:27.017 "zcopy": false, 00:15:27.017 "get_zone_info": false, 00:15:27.017 "zone_management": false, 00:15:27.017 "zone_append": false, 00:15:27.017 "compare": false, 00:15:27.017 "compare_and_write": false, 00:15:27.017 "abort": false, 00:15:27.017 "seek_hole": false, 00:15:27.017 "seek_data": false, 00:15:27.017 "copy": false, 00:15:27.017 "nvme_iov_md": false 00:15:27.017 }, 00:15:27.017 "memory_domains": [ 00:15:27.017 { 00:15:27.017 "dma_device_id": "system", 00:15:27.017 "dma_device_type": 1 00:15:27.017 }, 00:15:27.017 { 00:15:27.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.017 "dma_device_type": 2 00:15:27.017 }, 00:15:27.017 { 00:15:27.017 "dma_device_id": "system", 00:15:27.017 "dma_device_type": 1 00:15:27.017 }, 00:15:27.017 { 00:15:27.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.017 "dma_device_type": 2 00:15:27.017 }, 00:15:27.017 { 00:15:27.017 "dma_device_id": "system", 00:15:27.017 "dma_device_type": 1 00:15:27.017 }, 00:15:27.017 { 00:15:27.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.017 "dma_device_type": 2 00:15:27.017 }, 00:15:27.017 { 00:15:27.017 "dma_device_id": "system", 00:15:27.017 "dma_device_type": 1 00:15:27.017 }, 00:15:27.017 { 00:15:27.017 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:27.017 "dma_device_type": 2 00:15:27.017 } 00:15:27.017 ], 00:15:27.017 "driver_specific": { 00:15:27.017 "raid": { 00:15:27.017 "uuid": "583317cd-c352-4ea7-85a0-31a8b6a2bd33", 00:15:27.017 "strip_size_kb": 64, 00:15:27.017 "state": "online", 00:15:27.017 "raid_level": "raid0", 00:15:27.017 "superblock": true, 00:15:27.017 "num_base_bdevs": 4, 00:15:27.017 "num_base_bdevs_discovered": 4, 00:15:27.017 "num_base_bdevs_operational": 4, 00:15:27.017 "base_bdevs_list": [ 00:15:27.017 { 00:15:27.017 "name": "pt1", 00:15:27.017 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:27.017 "is_configured": true, 00:15:27.017 "data_offset": 2048, 00:15:27.017 "data_size": 63488 00:15:27.017 }, 00:15:27.017 { 00:15:27.017 "name": "pt2", 00:15:27.017 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:27.017 "is_configured": true, 00:15:27.017 "data_offset": 2048, 00:15:27.017 "data_size": 63488 00:15:27.017 }, 00:15:27.017 { 00:15:27.017 "name": "pt3", 00:15:27.017 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:27.017 "is_configured": true, 00:15:27.017 "data_offset": 2048, 00:15:27.017 "data_size": 63488 00:15:27.017 }, 00:15:27.017 { 00:15:27.017 "name": "pt4", 00:15:27.017 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:27.017 "is_configured": true, 00:15:27.017 "data_offset": 2048, 00:15:27.017 "data_size": 63488 00:15:27.017 } 00:15:27.017 ] 00:15:27.017 } 00:15:27.017 } 00:15:27.017 }' 00:15:27.017 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:27.276 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:27.276 pt2 00:15:27.276 pt3 00:15:27.276 pt4' 00:15:27.276 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.276 12:14:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:27.276 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:27.276 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:27.276 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.276 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.276 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.276 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.276 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:27.276 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:27.276 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:27.276 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:27.276 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.276 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.276 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.276 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.276 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:27.276 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:27.276 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:27.276 12:14:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:27.276 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.276 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.276 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.276 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.276 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:27.276 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:27.276 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:27.276 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.276 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:27.276 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.276 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.535 [2024-11-25 12:14:23.404041] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=583317cd-c352-4ea7-85a0-31a8b6a2bd33 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 583317cd-c352-4ea7-85a0-31a8b6a2bd33 ']' 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.535 [2024-11-25 12:14:23.439700] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:27.535 [2024-11-25 12:14:23.439845] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:27.535 [2024-11-25 12:14:23.440051] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:27.535 [2024-11-25 12:14:23.440242] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:27.535 [2024-11-25 12:14:23.440410] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:27.535 12:14:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.535 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.535 [2024-11-25 12:14:23.587744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:27.535 [2024-11-25 12:14:23.590164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:27.535 [2024-11-25 12:14:23.590234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:27.535 [2024-11-25 12:14:23.590292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:27.535 [2024-11-25 12:14:23.590400] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:27.535 [2024-11-25 12:14:23.590472] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:27.535 [2024-11-25 12:14:23.590506] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:27.535 [2024-11-25 12:14:23.590536] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:27.535 [2024-11-25 12:14:23.590558] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:27.535 [2024-11-25 12:14:23.590578] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:15:27.535 request: 00:15:27.535 { 00:15:27.535 "name": "raid_bdev1", 00:15:27.535 "raid_level": "raid0", 00:15:27.535 "base_bdevs": [ 00:15:27.535 "malloc1", 00:15:27.535 "malloc2", 00:15:27.535 "malloc3", 00:15:27.535 "malloc4" 00:15:27.535 ], 00:15:27.535 "strip_size_kb": 64, 00:15:27.535 "superblock": false, 00:15:27.535 "method": "bdev_raid_create", 00:15:27.535 "req_id": 1 00:15:27.535 } 00:15:27.536 Got JSON-RPC error response 00:15:27.536 response: 00:15:27.536 { 00:15:27.536 "code": -17, 00:15:27.536 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:27.536 } 00:15:27.536 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:27.536 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:27.536 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:27.536 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:27.536 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:27.536 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.536 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.536 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:27.536 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.536 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.794 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:27.794 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:27.794 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:15:27.794 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.794 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.794 [2024-11-25 12:14:23.651750] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:27.794 [2024-11-25 12:14:23.651829] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:27.794 [2024-11-25 12:14:23.651858] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:27.794 [2024-11-25 12:14:23.651877] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:27.794 [2024-11-25 12:14:23.654700] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:27.794 [2024-11-25 12:14:23.654753] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:27.794 [2024-11-25 12:14:23.654855] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:27.794 [2024-11-25 12:14:23.654962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:27.794 pt1 00:15:27.794 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.794 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:15:27.794 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:27.794 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.794 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:27.794 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.794 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:15:27.794 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.794 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.794 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.794 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.794 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.794 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.794 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.794 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.794 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.794 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.794 "name": "raid_bdev1", 00:15:27.794 "uuid": "583317cd-c352-4ea7-85a0-31a8b6a2bd33", 00:15:27.794 "strip_size_kb": 64, 00:15:27.794 "state": "configuring", 00:15:27.794 "raid_level": "raid0", 00:15:27.794 "superblock": true, 00:15:27.794 "num_base_bdevs": 4, 00:15:27.794 "num_base_bdevs_discovered": 1, 00:15:27.794 "num_base_bdevs_operational": 4, 00:15:27.794 "base_bdevs_list": [ 00:15:27.794 { 00:15:27.794 "name": "pt1", 00:15:27.794 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:27.794 "is_configured": true, 00:15:27.794 "data_offset": 2048, 00:15:27.794 "data_size": 63488 00:15:27.794 }, 00:15:27.794 { 00:15:27.794 "name": null, 00:15:27.794 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:27.794 "is_configured": false, 00:15:27.794 "data_offset": 2048, 00:15:27.794 "data_size": 63488 00:15:27.794 }, 00:15:27.794 { 00:15:27.794 "name": null, 00:15:27.794 
"uuid": "00000000-0000-0000-0000-000000000003", 00:15:27.794 "is_configured": false, 00:15:27.794 "data_offset": 2048, 00:15:27.795 "data_size": 63488 00:15:27.795 }, 00:15:27.795 { 00:15:27.795 "name": null, 00:15:27.795 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:27.795 "is_configured": false, 00:15:27.795 "data_offset": 2048, 00:15:27.795 "data_size": 63488 00:15:27.795 } 00:15:27.795 ] 00:15:27.795 }' 00:15:27.795 12:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.795 12:14:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.053 12:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:15:28.053 12:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:28.053 12:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.053 12:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.053 [2024-11-25 12:14:24.136113] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:28.053 [2024-11-25 12:14:24.136206] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:28.053 [2024-11-25 12:14:24.136237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:28.053 [2024-11-25 12:14:24.136256] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:28.053 [2024-11-25 12:14:24.136832] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:28.053 [2024-11-25 12:14:24.136871] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:28.053 [2024-11-25 12:14:24.136974] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:28.053 [2024-11-25 12:14:24.137012] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:28.053 pt2 00:15:28.053 12:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.053 12:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:28.053 12:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.053 12:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.311 [2024-11-25 12:14:24.144096] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:28.311 12:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.311 12:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:15:28.311 12:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:28.311 12:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:28.311 12:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:28.311 12:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.311 12:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:28.311 12:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.311 12:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.311 12:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.311 12:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.311 12:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.311 12:14:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.311 12:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.311 12:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.311 12:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.311 12:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.311 "name": "raid_bdev1", 00:15:28.311 "uuid": "583317cd-c352-4ea7-85a0-31a8b6a2bd33", 00:15:28.311 "strip_size_kb": 64, 00:15:28.311 "state": "configuring", 00:15:28.311 "raid_level": "raid0", 00:15:28.311 "superblock": true, 00:15:28.311 "num_base_bdevs": 4, 00:15:28.311 "num_base_bdevs_discovered": 1, 00:15:28.311 "num_base_bdevs_operational": 4, 00:15:28.311 "base_bdevs_list": [ 00:15:28.311 { 00:15:28.311 "name": "pt1", 00:15:28.311 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:28.311 "is_configured": true, 00:15:28.311 "data_offset": 2048, 00:15:28.311 "data_size": 63488 00:15:28.311 }, 00:15:28.311 { 00:15:28.311 "name": null, 00:15:28.311 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:28.311 "is_configured": false, 00:15:28.311 "data_offset": 0, 00:15:28.311 "data_size": 63488 00:15:28.311 }, 00:15:28.311 { 00:15:28.311 "name": null, 00:15:28.311 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:28.311 "is_configured": false, 00:15:28.311 "data_offset": 2048, 00:15:28.311 "data_size": 63488 00:15:28.311 }, 00:15:28.311 { 00:15:28.311 "name": null, 00:15:28.311 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:28.311 "is_configured": false, 00:15:28.311 "data_offset": 2048, 00:15:28.311 "data_size": 63488 00:15:28.311 } 00:15:28.311 ] 00:15:28.311 }' 00:15:28.311 12:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.311 12:14:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:28.569 12:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:28.569 12:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:28.569 12:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:28.569 12:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.569 12:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.569 [2024-11-25 12:14:24.644322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:28.569 [2024-11-25 12:14:24.644474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:28.569 [2024-11-25 12:14:24.644514] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:28.569 [2024-11-25 12:14:24.644532] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:28.569 [2024-11-25 12:14:24.645213] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:28.569 [2024-11-25 12:14:24.645258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:28.569 [2024-11-25 12:14:24.645404] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:28.569 [2024-11-25 12:14:24.645456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:28.569 pt2 00:15:28.569 12:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.569 12:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:28.570 12:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:28.570 12:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:28.570 12:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.570 12:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.570 [2024-11-25 12:14:24.652219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:28.570 [2024-11-25 12:14:24.652591] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:28.570 [2024-11-25 12:14:24.652670] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:28.570 [2024-11-25 12:14:24.652856] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:28.570 [2024-11-25 12:14:24.653399] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:28.570 [2024-11-25 12:14:24.653564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:28.570 [2024-11-25 12:14:24.653673] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:28.570 [2024-11-25 12:14:24.653705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:28.570 pt3 00:15:28.570 12:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.570 12:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:28.570 12:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:28.570 12:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:28.570 12:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.570 12:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.829 [2024-11-25 12:14:24.660200] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:28.829 [2024-11-25 12:14:24.660396] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:28.829 [2024-11-25 12:14:24.660612] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:28.829 [2024-11-25 12:14:24.660751] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:28.829 [2024-11-25 12:14:24.661402] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:28.829 [2024-11-25 12:14:24.661547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:28.829 [2024-11-25 12:14:24.661737] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:28.829 [2024-11-25 12:14:24.661777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:28.829 [2024-11-25 12:14:24.661959] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:28.829 [2024-11-25 12:14:24.661977] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:28.829 [2024-11-25 12:14:24.662320] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:28.829 [2024-11-25 12:14:24.662551] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:28.830 [2024-11-25 12:14:24.662576] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:28.830 [2024-11-25 12:14:24.662741] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:28.830 pt4 00:15:28.830 12:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.830 12:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:28.830 12:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:15:28.830 12:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:28.830 12:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:28.830 12:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.830 12:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:28.830 12:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.830 12:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:28.830 12:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.830 12:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.830 12:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.830 12:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.830 12:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.830 12:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.830 12:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.830 12:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.830 12:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.830 12:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.830 "name": "raid_bdev1", 00:15:28.830 "uuid": "583317cd-c352-4ea7-85a0-31a8b6a2bd33", 00:15:28.830 "strip_size_kb": 64, 00:15:28.830 "state": "online", 00:15:28.830 "raid_level": "raid0", 00:15:28.830 
"superblock": true, 00:15:28.830 "num_base_bdevs": 4, 00:15:28.830 "num_base_bdevs_discovered": 4, 00:15:28.830 "num_base_bdevs_operational": 4, 00:15:28.830 "base_bdevs_list": [ 00:15:28.830 { 00:15:28.830 "name": "pt1", 00:15:28.830 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:28.830 "is_configured": true, 00:15:28.830 "data_offset": 2048, 00:15:28.830 "data_size": 63488 00:15:28.830 }, 00:15:28.830 { 00:15:28.830 "name": "pt2", 00:15:28.830 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:28.830 "is_configured": true, 00:15:28.830 "data_offset": 2048, 00:15:28.830 "data_size": 63488 00:15:28.830 }, 00:15:28.830 { 00:15:28.830 "name": "pt3", 00:15:28.830 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:28.830 "is_configured": true, 00:15:28.830 "data_offset": 2048, 00:15:28.830 "data_size": 63488 00:15:28.830 }, 00:15:28.830 { 00:15:28.830 "name": "pt4", 00:15:28.830 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:28.830 "is_configured": true, 00:15:28.830 "data_offset": 2048, 00:15:28.830 "data_size": 63488 00:15:28.830 } 00:15:28.830 ] 00:15:28.830 }' 00:15:28.830 12:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.830 12:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.089 12:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:29.089 12:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:29.089 12:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:29.089 12:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:29.089 12:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:29.089 12:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:29.089 12:14:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:29.089 12:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:29.089 12:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.089 12:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.089 [2024-11-25 12:14:25.164931] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:29.347 12:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.347 12:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:29.347 "name": "raid_bdev1", 00:15:29.347 "aliases": [ 00:15:29.347 "583317cd-c352-4ea7-85a0-31a8b6a2bd33" 00:15:29.347 ], 00:15:29.347 "product_name": "Raid Volume", 00:15:29.347 "block_size": 512, 00:15:29.347 "num_blocks": 253952, 00:15:29.347 "uuid": "583317cd-c352-4ea7-85a0-31a8b6a2bd33", 00:15:29.347 "assigned_rate_limits": { 00:15:29.347 "rw_ios_per_sec": 0, 00:15:29.347 "rw_mbytes_per_sec": 0, 00:15:29.347 "r_mbytes_per_sec": 0, 00:15:29.347 "w_mbytes_per_sec": 0 00:15:29.347 }, 00:15:29.348 "claimed": false, 00:15:29.348 "zoned": false, 00:15:29.348 "supported_io_types": { 00:15:29.348 "read": true, 00:15:29.348 "write": true, 00:15:29.348 "unmap": true, 00:15:29.348 "flush": true, 00:15:29.348 "reset": true, 00:15:29.348 "nvme_admin": false, 00:15:29.348 "nvme_io": false, 00:15:29.348 "nvme_io_md": false, 00:15:29.348 "write_zeroes": true, 00:15:29.348 "zcopy": false, 00:15:29.348 "get_zone_info": false, 00:15:29.348 "zone_management": false, 00:15:29.348 "zone_append": false, 00:15:29.348 "compare": false, 00:15:29.348 "compare_and_write": false, 00:15:29.348 "abort": false, 00:15:29.348 "seek_hole": false, 00:15:29.348 "seek_data": false, 00:15:29.348 "copy": false, 00:15:29.348 "nvme_iov_md": false 00:15:29.348 }, 00:15:29.348 
"memory_domains": [ 00:15:29.348 { 00:15:29.348 "dma_device_id": "system", 00:15:29.348 "dma_device_type": 1 00:15:29.348 }, 00:15:29.348 { 00:15:29.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.348 "dma_device_type": 2 00:15:29.348 }, 00:15:29.348 { 00:15:29.348 "dma_device_id": "system", 00:15:29.348 "dma_device_type": 1 00:15:29.348 }, 00:15:29.348 { 00:15:29.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.348 "dma_device_type": 2 00:15:29.348 }, 00:15:29.348 { 00:15:29.348 "dma_device_id": "system", 00:15:29.348 "dma_device_type": 1 00:15:29.348 }, 00:15:29.348 { 00:15:29.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.348 "dma_device_type": 2 00:15:29.348 }, 00:15:29.348 { 00:15:29.348 "dma_device_id": "system", 00:15:29.348 "dma_device_type": 1 00:15:29.348 }, 00:15:29.348 { 00:15:29.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.348 "dma_device_type": 2 00:15:29.348 } 00:15:29.348 ], 00:15:29.348 "driver_specific": { 00:15:29.348 "raid": { 00:15:29.348 "uuid": "583317cd-c352-4ea7-85a0-31a8b6a2bd33", 00:15:29.348 "strip_size_kb": 64, 00:15:29.348 "state": "online", 00:15:29.348 "raid_level": "raid0", 00:15:29.348 "superblock": true, 00:15:29.348 "num_base_bdevs": 4, 00:15:29.348 "num_base_bdevs_discovered": 4, 00:15:29.348 "num_base_bdevs_operational": 4, 00:15:29.348 "base_bdevs_list": [ 00:15:29.348 { 00:15:29.348 "name": "pt1", 00:15:29.348 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:29.348 "is_configured": true, 00:15:29.348 "data_offset": 2048, 00:15:29.348 "data_size": 63488 00:15:29.348 }, 00:15:29.348 { 00:15:29.348 "name": "pt2", 00:15:29.348 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:29.348 "is_configured": true, 00:15:29.348 "data_offset": 2048, 00:15:29.348 "data_size": 63488 00:15:29.348 }, 00:15:29.348 { 00:15:29.348 "name": "pt3", 00:15:29.348 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:29.348 "is_configured": true, 00:15:29.348 "data_offset": 2048, 00:15:29.348 "data_size": 63488 
00:15:29.348 }, 00:15:29.348 { 00:15:29.348 "name": "pt4", 00:15:29.348 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:29.348 "is_configured": true, 00:15:29.348 "data_offset": 2048, 00:15:29.348 "data_size": 63488 00:15:29.348 } 00:15:29.348 ] 00:15:29.348 } 00:15:29.348 } 00:15:29.348 }' 00:15:29.348 12:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:29.348 12:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:29.348 pt2 00:15:29.348 pt3 00:15:29.348 pt4' 00:15:29.348 12:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:29.348 12:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:29.348 12:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:29.348 12:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:29.348 12:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.348 12:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:29.348 12:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.348 12:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.348 12:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:29.348 12:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:29.348 12:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:29.348 12:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:15:29.348 12:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.348 12:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.348 12:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:29.348 12:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.348 12:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:29.348 12:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:29.348 12:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:29.348 12:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:29.348 12:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:29.348 12:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.348 12:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.607 12:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.607 12:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:29.607 12:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:29.607 12:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:29.607 12:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:29.607 12:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 
00:15:29.607 12:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.607 12:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.607 12:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.607 12:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:29.607 12:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:29.607 12:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:29.607 12:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:29.607 12:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.607 12:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.607 [2024-11-25 12:14:25.524960] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:29.607 12:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.607 12:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 583317cd-c352-4ea7-85a0-31a8b6a2bd33 '!=' 583317cd-c352-4ea7-85a0-31a8b6a2bd33 ']' 00:15:29.607 12:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:15:29.607 12:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:29.607 12:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:29.607 12:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70837 00:15:29.607 12:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70837 ']' 00:15:29.607 12:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70837 00:15:29.607 12:14:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:15:29.607 12:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:29.607 12:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70837 00:15:29.607 killing process with pid 70837 00:15:29.607 12:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:29.607 12:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:29.607 12:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70837' 00:15:29.607 12:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70837 00:15:29.607 [2024-11-25 12:14:25.600541] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:29.607 [2024-11-25 12:14:25.600706] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:29.607 [2024-11-25 12:14:25.600823] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:29.607 [2024-11-25 12:14:25.600842] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:29.607 12:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70837 00:15:30.174 [2024-11-25 12:14:25.990431] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:31.109 ************************************ 00:15:31.109 END TEST raid_superblock_test 00:15:31.109 ************************************ 00:15:31.109 12:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:31.109 00:15:31.109 real 0m5.857s 00:15:31.109 user 0m8.687s 00:15:31.109 sys 0m0.864s 00:15:31.109 12:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:31.109 12:14:27 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.109 12:14:27 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:15:31.109 12:14:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:31.109 12:14:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:31.109 12:14:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:31.109 ************************************ 00:15:31.109 START TEST raid_read_error_test 00:15:31.109 ************************************ 00:15:31.109 12:14:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:15:31.109 12:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:15:31.109 12:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:15:31.109 12:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:15:31.109 12:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:31.109 12:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:31.109 12:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:31.109 12:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:31.109 12:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:31.109 12:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:31.109 12:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:31.109 12:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:31.109 12:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:31.109 12:14:27 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:31.109 12:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:31.109 12:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:15:31.109 12:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:31.109 12:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:31.109 12:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:31.109 12:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:31.109 12:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:31.109 12:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:31.109 12:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:31.109 12:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:31.109 12:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:31.110 12:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:15:31.110 12:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:15:31.110 12:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:15:31.110 12:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:31.110 12:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.m5icKDQXpm 00:15:31.110 12:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71102 00:15:31.110 12:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71102 00:15:31.110 12:14:27 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@835 -- # '[' -z 71102 ']' 00:15:31.110 12:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:31.110 12:14:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.110 12:14:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:31.110 12:14:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.110 12:14:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:31.110 12:14:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.368 [2024-11-25 12:14:27.269424] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:15:31.368 [2024-11-25 12:14:27.269679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71102 ] 00:15:31.628 [2024-11-25 12:14:27.471853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.628 [2024-11-25 12:14:27.619852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.886 [2024-11-25 12:14:27.845035] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:31.886 [2024-11-25 12:14:27.845144] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.455 BaseBdev1_malloc 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.455 true 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.455 [2024-11-25 12:14:28.319449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:32.455 [2024-11-25 12:14:28.319556] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.455 [2024-11-25 12:14:28.319592] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:32.455 [2024-11-25 12:14:28.319612] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.455 [2024-11-25 12:14:28.322645] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.455 BaseBdev1 00:15:32.455 [2024-11-25 12:14:28.322949] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.455 BaseBdev2_malloc 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.455 true 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.455 [2024-11-25 12:14:28.384024] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:32.455 [2024-11-25 12:14:28.384145] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.455 [2024-11-25 12:14:28.384182] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:32.455 [2024-11-25 12:14:28.384201] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.455 [2024-11-25 12:14:28.387418] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.455 [2024-11-25 12:14:28.387481] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:32.455 BaseBdev2 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.455 BaseBdev3_malloc 00:15:32.455 12:14:28 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.455 true 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.455 [2024-11-25 12:14:28.462289] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:32.455 [2024-11-25 12:14:28.462403] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.455 [2024-11-25 12:14:28.462439] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:32.455 [2024-11-25 12:14:28.462458] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.455 [2024-11-25 12:14:28.465491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.455 BaseBdev3 00:15:32.455 [2024-11-25 12:14:28.465766] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.455 BaseBdev4_malloc 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.455 true 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.455 [2024-11-25 12:14:28.530491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:32.455 [2024-11-25 12:14:28.530886] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.455 [2024-11-25 12:14:28.530937] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:32.455 [2024-11-25 12:14:28.530958] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.455 BaseBdev4 00:15:32.455 [2024-11-25 12:14:28.534178] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.455 [2024-11-25 12:14:28.534234] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.455 12:14:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.455 [2024-11-25 12:14:28.538551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:32.455 [2024-11-25 12:14:28.541426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:32.455 [2024-11-25 12:14:28.541678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:32.455 [2024-11-25 12:14:28.541834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:32.455 [2024-11-25 12:14:28.542217] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:32.455 [2024-11-25 12:14:28.542287] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:32.455 [2024-11-25 12:14:28.542769] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:15:32.455 [2024-11-25 12:14:28.543133] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:32.455 [2024-11-25 12:14:28.543260] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:32.455 [2024-11-25 12:14:28.543768] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.714 12:14:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.714 12:14:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:32.714 12:14:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:32.714 12:14:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:32.714 12:14:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:32.715 12:14:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:32.715 12:14:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:32.715 12:14:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.715 12:14:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.715 12:14:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.715 12:14:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.715 12:14:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.715 12:14:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.715 12:14:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.715 12:14:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.715 12:14:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.715 12:14:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.715 "name": "raid_bdev1", 00:15:32.715 "uuid": "76cb28ba-655b-4ffb-84e4-5f3dc3853a20", 00:15:32.715 "strip_size_kb": 64, 00:15:32.715 "state": "online", 00:15:32.715 "raid_level": "raid0", 00:15:32.715 "superblock": true, 00:15:32.715 "num_base_bdevs": 4, 00:15:32.715 "num_base_bdevs_discovered": 4, 00:15:32.715 "num_base_bdevs_operational": 4, 00:15:32.715 "base_bdevs_list": [ 00:15:32.715 
{ 00:15:32.715 "name": "BaseBdev1", 00:15:32.715 "uuid": "d70c5a71-998b-5729-a5f5-530aaffec1c1", 00:15:32.715 "is_configured": true, 00:15:32.715 "data_offset": 2048, 00:15:32.715 "data_size": 63488 00:15:32.715 }, 00:15:32.715 { 00:15:32.715 "name": "BaseBdev2", 00:15:32.715 "uuid": "d150deb3-cd16-59cd-a3da-c24544dd389a", 00:15:32.715 "is_configured": true, 00:15:32.715 "data_offset": 2048, 00:15:32.715 "data_size": 63488 00:15:32.715 }, 00:15:32.715 { 00:15:32.715 "name": "BaseBdev3", 00:15:32.715 "uuid": "d4fcf115-c1c8-54a4-9f63-cc9423f3650b", 00:15:32.715 "is_configured": true, 00:15:32.715 "data_offset": 2048, 00:15:32.715 "data_size": 63488 00:15:32.715 }, 00:15:32.715 { 00:15:32.715 "name": "BaseBdev4", 00:15:32.715 "uuid": "6c826ba7-f222-53e9-9392-ecd86b65896c", 00:15:32.715 "is_configured": true, 00:15:32.715 "data_offset": 2048, 00:15:32.715 "data_size": 63488 00:15:32.715 } 00:15:32.715 ] 00:15:32.715 }' 00:15:32.715 12:14:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.715 12:14:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.974 12:14:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:32.974 12:14:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:33.253 [2024-11-25 12:14:29.161439] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:15:34.189 12:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:34.189 12:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.189 12:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.189 12:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.189 12:14:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:34.189 12:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:15:34.189 12:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:15:34.189 12:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:34.189 12:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.189 12:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.189 12:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:34.189 12:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.189 12:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:34.189 12:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.189 12:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.189 12:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.189 12:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.189 12:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.189 12:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.189 12:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.189 12:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.189 12:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.189 12:14:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.189 "name": "raid_bdev1", 00:15:34.189 "uuid": "76cb28ba-655b-4ffb-84e4-5f3dc3853a20", 00:15:34.189 "strip_size_kb": 64, 00:15:34.189 "state": "online", 00:15:34.189 "raid_level": "raid0", 00:15:34.189 "superblock": true, 00:15:34.189 "num_base_bdevs": 4, 00:15:34.189 "num_base_bdevs_discovered": 4, 00:15:34.189 "num_base_bdevs_operational": 4, 00:15:34.189 "base_bdevs_list": [ 00:15:34.189 { 00:15:34.189 "name": "BaseBdev1", 00:15:34.189 "uuid": "d70c5a71-998b-5729-a5f5-530aaffec1c1", 00:15:34.189 "is_configured": true, 00:15:34.189 "data_offset": 2048, 00:15:34.189 "data_size": 63488 00:15:34.189 }, 00:15:34.189 { 00:15:34.189 "name": "BaseBdev2", 00:15:34.189 "uuid": "d150deb3-cd16-59cd-a3da-c24544dd389a", 00:15:34.190 "is_configured": true, 00:15:34.190 "data_offset": 2048, 00:15:34.190 "data_size": 63488 00:15:34.190 }, 00:15:34.190 { 00:15:34.190 "name": "BaseBdev3", 00:15:34.190 "uuid": "d4fcf115-c1c8-54a4-9f63-cc9423f3650b", 00:15:34.190 "is_configured": true, 00:15:34.190 "data_offset": 2048, 00:15:34.190 "data_size": 63488 00:15:34.190 }, 00:15:34.190 { 00:15:34.190 "name": "BaseBdev4", 00:15:34.190 "uuid": "6c826ba7-f222-53e9-9392-ecd86b65896c", 00:15:34.190 "is_configured": true, 00:15:34.190 "data_offset": 2048, 00:15:34.190 "data_size": 63488 00:15:34.190 } 00:15:34.190 ] 00:15:34.190 }' 00:15:34.190 12:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.190 12:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.756 12:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:34.756 12:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.756 12:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.756 [2024-11-25 12:14:30.547198] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:34.756 [2024-11-25 12:14:30.547602] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:34.756 [2024-11-25 12:14:30.550985] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:34.756 { 00:15:34.756 "results": [ 00:15:34.756 { 00:15:34.756 "job": "raid_bdev1", 00:15:34.756 "core_mask": "0x1", 00:15:34.756 "workload": "randrw", 00:15:34.756 "percentage": 50, 00:15:34.756 "status": "finished", 00:15:34.756 "queue_depth": 1, 00:15:34.756 "io_size": 131072, 00:15:34.756 "runtime": 1.383441, 00:15:34.756 "iops": 9715.629361859306, 00:15:34.756 "mibps": 1214.4536702324133, 00:15:34.756 "io_failed": 1, 00:15:34.756 "io_timeout": 0, 00:15:34.756 "avg_latency_us": 145.08166587764268, 00:15:34.756 "min_latency_us": 43.985454545454544, 00:15:34.756 "max_latency_us": 1906.5018181818182 00:15:34.756 } 00:15:34.756 ], 00:15:34.756 "core_count": 1 00:15:34.756 } 00:15:34.756 [2024-11-25 12:14:30.551257] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.756 [2024-11-25 12:14:30.551357] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:34.756 [2024-11-25 12:14:30.551381] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:34.756 12:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.756 12:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71102 00:15:34.756 12:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71102 ']' 00:15:34.756 12:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71102 00:15:34.756 12:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:15:34.756 12:14:30 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:34.756 12:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71102 00:15:34.756 killing process with pid 71102 00:15:34.756 12:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:34.756 12:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:34.756 12:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71102' 00:15:34.756 12:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71102 00:15:34.756 12:14:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71102 00:15:34.756 [2024-11-25 12:14:30.586937] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:35.015 [2024-11-25 12:14:30.909209] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:36.388 12:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:36.388 12:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.m5icKDQXpm 00:15:36.388 12:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:36.388 ************************************ 00:15:36.388 END TEST raid_read_error_test 00:15:36.388 ************************************ 00:15:36.388 12:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:15:36.388 12:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:15:36.388 12:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:36.388 12:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:36.388 12:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:15:36.388 00:15:36.388 real 0m4.967s 
00:15:36.388 user 0m6.015s 00:15:36.388 sys 0m0.637s 00:15:36.388 12:14:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:36.388 12:14:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.388 12:14:32 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:15:36.388 12:14:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:36.388 12:14:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:36.388 12:14:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:36.388 ************************************ 00:15:36.388 START TEST raid_write_error_test 00:15:36.388 ************************************ 00:15:36.388 12:14:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:15:36.388 12:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:15:36.388 12:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:15:36.388 12:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:15:36.388 12:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:36.388 12:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:36.388 12:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:36.388 12:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:36.388 12:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:36.388 12:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:36.388 12:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:36.388 12:14:32 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:36.388 12:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:36.388 12:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:36.388 12:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:36.388 12:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:15:36.388 12:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:36.388 12:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:36.388 12:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:36.388 12:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:36.388 12:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:36.388 12:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:36.388 12:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:36.388 12:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:36.388 12:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:36.388 12:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:15:36.388 12:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:15:36.388 12:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:15:36.388 12:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:36.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:36.388 12:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.6dHBXoXUKk 00:15:36.388 12:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71253 00:15:36.388 12:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71253 00:15:36.388 12:14:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71253 ']' 00:15:36.388 12:14:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.388 12:14:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:36.388 12:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:36.388 12:14:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.388 12:14:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:36.388 12:14:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.388 [2024-11-25 12:14:32.278417] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:15:36.388 [2024-11-25 12:14:32.278591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71253 ] 00:15:36.388 [2024-11-25 12:14:32.463092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.647 [2024-11-25 12:14:32.608208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.905 [2024-11-25 12:14:32.836947] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:36.905 [2024-11-25 12:14:32.837025] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:37.473 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:37.473 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:37.473 12:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:37.473 12:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:37.473 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.473 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.473 BaseBdev1_malloc 00:15:37.473 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.473 12:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:37.473 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.473 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.473 true 00:15:37.473 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:37.473 12:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:37.473 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.473 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.473 [2024-11-25 12:14:33.350772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:37.473 [2024-11-25 12:14:33.351122] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.473 [2024-11-25 12:14:33.351173] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:37.473 [2024-11-25 12:14:33.351194] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.473 [2024-11-25 12:14:33.354231] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.473 BaseBdev1 00:15:37.473 [2024-11-25 12:14:33.354445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:37.474 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.474 12:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:37.474 12:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:37.474 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.474 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.474 BaseBdev2_malloc 00:15:37.474 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.474 12:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:37.474 12:14:33 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.474 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.474 true 00:15:37.474 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.474 12:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:37.474 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.474 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.474 [2024-11-25 12:14:33.420192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:37.474 [2024-11-25 12:14:33.420293] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.474 [2024-11-25 12:14:33.420326] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:37.474 [2024-11-25 12:14:33.420362] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.474 [2024-11-25 12:14:33.423577] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.474 [2024-11-25 12:14:33.423630] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:37.474 BaseBdev2 00:15:37.474 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.474 12:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:37.474 12:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:37.474 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.474 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:15:37.474 BaseBdev3_malloc 00:15:37.474 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.474 12:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:37.474 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.474 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.474 true 00:15:37.474 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.474 12:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:37.474 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.474 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.474 [2024-11-25 12:14:33.493101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:37.474 [2024-11-25 12:14:33.493201] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.474 [2024-11-25 12:14:33.493236] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:37.474 [2024-11-25 12:14:33.493255] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.474 [2024-11-25 12:14:33.496434] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.474 [2024-11-25 12:14:33.496487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:37.474 BaseBdev3 00:15:37.474 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.474 12:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:37.474 12:14:33 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:37.474 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.474 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.474 BaseBdev4_malloc 00:15:37.474 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.474 12:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:15:37.474 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.474 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.474 true 00:15:37.474 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.474 12:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:37.474 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.474 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.474 [2024-11-25 12:14:33.555417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:37.474 [2024-11-25 12:14:33.555526] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.474 [2024-11-25 12:14:33.555560] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:37.474 [2024-11-25 12:14:33.555579] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.474 [2024-11-25 12:14:33.558802] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.474 [2024-11-25 12:14:33.559122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:37.474 BaseBdev4 
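Reading the `bdev_raid.sh@814-817` xtrace above: for every entry in `base_bdevs` the test stacks three bdevs — a 32 MiB malloc bdev with 512-byte blocks, an error-injection bdev on top of it (which SPDK names `EE_<base>`), and a passthru bdev carrying the final `BaseBdevN` name. A minimal sketch that regenerates that RPC sequence; it only prints the commands, since issuing them needs a running SPDK target, and the `rpc.py` spelling is an assumption (the script uses its internal `rpc_cmd` wrapper):

```shell
# Reconstructed from the xtrace above; prints the RPC calls rather than
# issuing them, because they require a live SPDK target to succeed.
base_bdevs=(BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4)
cmds=()
for bdev in "${base_bdevs[@]}"; do
  cmds+=("rpc.py bdev_malloc_create 32 512 -b ${bdev}_malloc")          # backing store
  cmds+=("rpc.py bdev_error_create ${bdev}_malloc")                     # creates EE_${bdev}_malloc
  cmds+=("rpc.py bdev_passthru_create -b EE_${bdev}_malloc -p ${bdev}") # final BaseBdevN name
done
printf '%s\n' "${cmds[@]}"
```

The error bdev in the middle of each stack is what `bdev_error_inject_error EE_BaseBdev1_malloc write failure` later uses to force write failures against the assembled raid0 volume.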
00:15:37.474 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.474 12:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:15:37.474 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.474 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.733 [2024-11-25 12:14:33.563574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:37.733 [2024-11-25 12:14:33.566316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:37.733 [2024-11-25 12:14:33.566682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:37.733 [2024-11-25 12:14:33.566800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:37.733 [2024-11-25 12:14:33.567107] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:37.733 [2024-11-25 12:14:33.567135] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:37.733 [2024-11-25 12:14:33.567536] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:15:37.733 [2024-11-25 12:14:33.567779] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:37.733 [2024-11-25 12:14:33.567801] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:37.733 [2024-11-25 12:14:33.568083] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:37.733 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.733 12:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:15:37.733 12:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.733 12:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.733 12:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:37.733 12:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.733 12:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:37.733 12:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.733 12:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.733 12:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.733 12:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.733 12:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.733 12:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.733 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.733 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.733 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.733 12:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.733 "name": "raid_bdev1", 00:15:37.733 "uuid": "bb9a104e-4c69-4675-9bc0-ad8d694d6413", 00:15:37.733 "strip_size_kb": 64, 00:15:37.733 "state": "online", 00:15:37.733 "raid_level": "raid0", 00:15:37.733 "superblock": true, 00:15:37.733 "num_base_bdevs": 4, 00:15:37.733 "num_base_bdevs_discovered": 4, 00:15:37.733 
"num_base_bdevs_operational": 4, 00:15:37.733 "base_bdevs_list": [ 00:15:37.733 { 00:15:37.733 "name": "BaseBdev1", 00:15:37.733 "uuid": "3b25d92d-fb53-53fd-b97d-d5da5d5a018c", 00:15:37.733 "is_configured": true, 00:15:37.733 "data_offset": 2048, 00:15:37.733 "data_size": 63488 00:15:37.733 }, 00:15:37.733 { 00:15:37.733 "name": "BaseBdev2", 00:15:37.733 "uuid": "f88dba2a-cadb-5eb7-a2cd-0cefc446aee7", 00:15:37.733 "is_configured": true, 00:15:37.733 "data_offset": 2048, 00:15:37.733 "data_size": 63488 00:15:37.733 }, 00:15:37.733 { 00:15:37.733 "name": "BaseBdev3", 00:15:37.733 "uuid": "239b52fd-b795-5387-ba73-08a749e43de6", 00:15:37.733 "is_configured": true, 00:15:37.733 "data_offset": 2048, 00:15:37.733 "data_size": 63488 00:15:37.733 }, 00:15:37.733 { 00:15:37.733 "name": "BaseBdev4", 00:15:37.733 "uuid": "743975e3-1260-5bc0-a890-71447058a8ab", 00:15:37.733 "is_configured": true, 00:15:37.733 "data_offset": 2048, 00:15:37.733 "data_size": 63488 00:15:37.733 } 00:15:37.733 ] 00:15:37.733 }' 00:15:37.733 12:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.733 12:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.992 12:14:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:37.992 12:14:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:38.251 [2024-11-25 12:14:34.209854] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:15:39.184 12:14:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:39.184 12:14:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.184 12:14:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.184 12:14:35 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.184 12:14:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:39.184 12:14:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:15:39.184 12:14:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:15:39.184 12:14:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:39.184 12:14:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:39.184 12:14:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:39.184 12:14:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:39.184 12:14:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.184 12:14:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:39.184 12:14:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.184 12:14:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.184 12:14:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.184 12:14:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.184 12:14:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.184 12:14:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.184 12:14:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.184 12:14:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.184 12:14:35 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.184 12:14:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.184 "name": "raid_bdev1", 00:15:39.185 "uuid": "bb9a104e-4c69-4675-9bc0-ad8d694d6413", 00:15:39.185 "strip_size_kb": 64, 00:15:39.185 "state": "online", 00:15:39.185 "raid_level": "raid0", 00:15:39.185 "superblock": true, 00:15:39.185 "num_base_bdevs": 4, 00:15:39.185 "num_base_bdevs_discovered": 4, 00:15:39.185 "num_base_bdevs_operational": 4, 00:15:39.185 "base_bdevs_list": [ 00:15:39.185 { 00:15:39.185 "name": "BaseBdev1", 00:15:39.185 "uuid": "3b25d92d-fb53-53fd-b97d-d5da5d5a018c", 00:15:39.185 "is_configured": true, 00:15:39.185 "data_offset": 2048, 00:15:39.185 "data_size": 63488 00:15:39.185 }, 00:15:39.185 { 00:15:39.185 "name": "BaseBdev2", 00:15:39.185 "uuid": "f88dba2a-cadb-5eb7-a2cd-0cefc446aee7", 00:15:39.185 "is_configured": true, 00:15:39.185 "data_offset": 2048, 00:15:39.185 "data_size": 63488 00:15:39.185 }, 00:15:39.185 { 00:15:39.185 "name": "BaseBdev3", 00:15:39.185 "uuid": "239b52fd-b795-5387-ba73-08a749e43de6", 00:15:39.185 "is_configured": true, 00:15:39.185 "data_offset": 2048, 00:15:39.185 "data_size": 63488 00:15:39.185 }, 00:15:39.185 { 00:15:39.185 "name": "BaseBdev4", 00:15:39.185 "uuid": "743975e3-1260-5bc0-a890-71447058a8ab", 00:15:39.185 "is_configured": true, 00:15:39.185 "data_offset": 2048, 00:15:39.185 "data_size": 63488 00:15:39.185 } 00:15:39.185 ] 00:15:39.185 }' 00:15:39.185 12:14:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.185 12:14:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.751 12:14:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:39.751 12:14:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.751 12:14:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:15:39.751 [2024-11-25 12:14:35.551651] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:39.751 [2024-11-25 12:14:35.551981] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:39.751 [2024-11-25 12:14:35.555627] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:39.751 [2024-11-25 12:14:35.555942] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:39.751 [2024-11-25 12:14:35.556126] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:39.751 [2024-11-25 12:14:35.556299] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:39.751 { 00:15:39.751 "results": [ 00:15:39.751 { 00:15:39.751 "job": "raid_bdev1", 00:15:39.751 "core_mask": "0x1", 00:15:39.751 "workload": "randrw", 00:15:39.751 "percentage": 50, 00:15:39.751 "status": "finished", 00:15:39.751 "queue_depth": 1, 00:15:39.751 "io_size": 131072, 00:15:39.751 "runtime": 1.339338, 00:15:39.751 "iops": 9544.267391801024, 00:15:39.751 "mibps": 1193.033423975128, 00:15:39.751 "io_failed": 1, 00:15:39.751 "io_timeout": 0, 00:15:39.751 "avg_latency_us": 147.67150756627603, 00:15:39.751 "min_latency_us": 40.49454545454545, 00:15:39.751 "max_latency_us": 1839.4763636363637 00:15:39.751 } 00:15:39.751 ], 00:15:39.751 "core_count": 1 00:15:39.751 } 00:15:39.751 12:14:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.751 12:14:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71253 00:15:39.751 12:14:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71253 ']' 00:15:39.751 12:14:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71253 00:15:39.751 12:14:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
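The bdevperf results JSON above reports `"io_failed": 1` over `"runtime": 1.339338` seconds; the `fail_per_s` value the harness later greps out of the bdevperf log (`0.75`) is just that ratio rounded to two decimals. A quick cross-check of the arithmetic:

```shell
# Cross-check of the results JSON above: fail_per_s = io_failed / runtime.
io_failed=1
runtime=1.339338
fail_per_s=$(awk -v f="$io_failed" -v t="$runtime" 'BEGIN { printf "%.2f", f / t }')
echo "$fail_per_s"   # 0.75, matching the value extracted from the bdevperf log
```

The subsequent `[[ 0.75 != \0\.\0\0 ]]` check passes because raid0 has no redundancy (`has_redundancy raid0` returns 1), so the injected write error is expected to surface as a failed I/O rather than be absorbed.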
00:15:39.751 12:14:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:39.751 12:14:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71253 00:15:39.751 killing process with pid 71253 00:15:39.751 12:14:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:39.751 12:14:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:39.751 12:14:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71253' 00:15:39.751 12:14:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71253 00:15:39.751 12:14:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71253 00:15:39.751 [2024-11-25 12:14:35.595679] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:40.010 [2024-11-25 12:14:35.915377] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:41.388 12:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.6dHBXoXUKk 00:15:41.388 12:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:41.388 12:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:41.388 12:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:15:41.388 12:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:15:41.388 12:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:41.388 12:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:41.388 12:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:15:41.388 00:15:41.388 real 0m4.962s 00:15:41.388 user 0m5.974s 00:15:41.388 sys 0m0.682s 00:15:41.388 12:14:37 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:41.388 ************************************ 00:15:41.388 END TEST raid_write_error_test 00:15:41.388 ************************************ 00:15:41.388 12:14:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.388 12:14:37 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:15:41.388 12:14:37 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:15:41.388 12:14:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:41.388 12:14:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:41.388 12:14:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:41.388 ************************************ 00:15:41.388 START TEST raid_state_function_test 00:15:41.388 ************************************ 00:15:41.388 12:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:15:41.388 12:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:15:41.388 12:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:41.388 12:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:41.388 12:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:41.388 12:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:41.388 12:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:41.388 12:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:41.388 12:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:41.388 12:14:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:41.388 12:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:41.388 12:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:41.388 12:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:41.388 12:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:41.388 12:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:41.388 12:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:41.388 12:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:41.388 12:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:41.389 12:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:41.389 12:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:41.389 Process raid pid: 71398 00:15:41.389 12:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:41.389 12:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:41.389 12:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:41.389 12:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:41.389 12:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:41.389 12:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:15:41.389 12:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:41.389 12:14:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:41.389 12:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:41.389 12:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:41.389 12:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71398 00:15:41.389 12:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71398' 00:15:41.389 12:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71398 00:15:41.389 12:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:41.389 12:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71398 ']' 00:15:41.389 12:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.389 12:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:41.389 12:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.389 12:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:41.389 12:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.389 [2024-11-25 12:14:37.271694] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:15:41.389 [2024-11-25 12:14:37.271862] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:41.389 [2024-11-25 12:14:37.452697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.647 [2024-11-25 12:14:37.599582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.906 [2024-11-25 12:14:37.825918] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:41.906 [2024-11-25 12:14:37.825993] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:42.474 12:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:42.474 12:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:42.474 12:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:42.474 12:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.474 12:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.474 [2024-11-25 12:14:38.307504] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:42.474 [2024-11-25 12:14:38.307604] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:42.474 [2024-11-25 12:14:38.307621] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:42.474 [2024-11-25 12:14:38.307639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:42.474 [2024-11-25 12:14:38.307649] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:15:42.474 [2024-11-25 12:14:38.307664] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:42.474 [2024-11-25 12:14:38.307674] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:42.474 [2024-11-25 12:14:38.307688] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:42.474 12:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.474 12:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:42.474 12:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:42.474 12:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:42.474 12:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:42.474 12:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.474 12:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:42.474 12:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.474 12:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.474 12:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.474 12:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.474 12:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.474 12:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.474 12:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:15:42.474 12:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.474 12:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.474 12:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.474 "name": "Existed_Raid", 00:15:42.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.474 "strip_size_kb": 64, 00:15:42.474 "state": "configuring", 00:15:42.474 "raid_level": "concat", 00:15:42.474 "superblock": false, 00:15:42.474 "num_base_bdevs": 4, 00:15:42.474 "num_base_bdevs_discovered": 0, 00:15:42.474 "num_base_bdevs_operational": 4, 00:15:42.474 "base_bdevs_list": [ 00:15:42.474 { 00:15:42.474 "name": "BaseBdev1", 00:15:42.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.474 "is_configured": false, 00:15:42.474 "data_offset": 0, 00:15:42.474 "data_size": 0 00:15:42.474 }, 00:15:42.474 { 00:15:42.474 "name": "BaseBdev2", 00:15:42.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.474 "is_configured": false, 00:15:42.474 "data_offset": 0, 00:15:42.474 "data_size": 0 00:15:42.474 }, 00:15:42.474 { 00:15:42.474 "name": "BaseBdev3", 00:15:42.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.474 "is_configured": false, 00:15:42.474 "data_offset": 0, 00:15:42.474 "data_size": 0 00:15:42.474 }, 00:15:42.474 { 00:15:42.474 "name": "BaseBdev4", 00:15:42.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.474 "is_configured": false, 00:15:42.474 "data_offset": 0, 00:15:42.474 "data_size": 0 00:15:42.474 } 00:15:42.474 ] 00:15:42.474 }' 00:15:42.474 12:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.474 12:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.733 12:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:15:42.733 12:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.733 12:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.992 [2024-11-25 12:14:38.823657] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:42.992 [2024-11-25 12:14:38.823741] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:42.992 12:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.992 12:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:42.992 12:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.992 12:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.992 [2024-11-25 12:14:38.831585] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:42.992 [2024-11-25 12:14:38.831646] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:42.992 [2024-11-25 12:14:38.831661] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:42.992 [2024-11-25 12:14:38.831677] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:42.992 [2024-11-25 12:14:38.831687] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:42.992 [2024-11-25 12:14:38.831702] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:42.992 [2024-11-25 12:14:38.831712] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:42.992 [2024-11-25 12:14:38.831727] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:42.992 12:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.992 12:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:42.992 12:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.992 12:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.992 [2024-11-25 12:14:38.880485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:42.992 BaseBdev1 00:15:42.992 12:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.992 12:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:42.992 12:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:42.992 12:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:42.992 12:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:42.992 12:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:42.992 12:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:42.992 12:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:42.993 12:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.993 12:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.993 12:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.993 12:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:42.993 12:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.993 12:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.993 [ 00:15:42.993 { 00:15:42.993 "name": "BaseBdev1", 00:15:42.993 "aliases": [ 00:15:42.993 "c3881bac-e41b-4669-9e00-fd24ce6f47d0" 00:15:42.993 ], 00:15:42.993 "product_name": "Malloc disk", 00:15:42.993 "block_size": 512, 00:15:42.993 "num_blocks": 65536, 00:15:42.993 "uuid": "c3881bac-e41b-4669-9e00-fd24ce6f47d0", 00:15:42.993 "assigned_rate_limits": { 00:15:42.993 "rw_ios_per_sec": 0, 00:15:42.993 "rw_mbytes_per_sec": 0, 00:15:42.993 "r_mbytes_per_sec": 0, 00:15:42.993 "w_mbytes_per_sec": 0 00:15:42.993 }, 00:15:42.993 "claimed": true, 00:15:42.993 "claim_type": "exclusive_write", 00:15:42.993 "zoned": false, 00:15:42.993 "supported_io_types": { 00:15:42.993 "read": true, 00:15:42.993 "write": true, 00:15:42.993 "unmap": true, 00:15:42.993 "flush": true, 00:15:42.993 "reset": true, 00:15:42.993 "nvme_admin": false, 00:15:42.993 "nvme_io": false, 00:15:42.993 "nvme_io_md": false, 00:15:42.993 "write_zeroes": true, 00:15:42.993 "zcopy": true, 00:15:42.993 "get_zone_info": false, 00:15:42.993 "zone_management": false, 00:15:42.993 "zone_append": false, 00:15:42.993 "compare": false, 00:15:42.993 "compare_and_write": false, 00:15:42.993 "abort": true, 00:15:42.993 "seek_hole": false, 00:15:42.993 "seek_data": false, 00:15:42.993 "copy": true, 00:15:42.993 "nvme_iov_md": false 00:15:42.993 }, 00:15:42.993 "memory_domains": [ 00:15:42.993 { 00:15:42.993 "dma_device_id": "system", 00:15:42.993 "dma_device_type": 1 00:15:42.993 }, 00:15:42.993 { 00:15:42.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.993 "dma_device_type": 2 00:15:42.993 } 00:15:42.993 ], 00:15:42.993 "driver_specific": {} 00:15:42.993 } 00:15:42.993 ] 00:15:42.993 12:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:42.993 12:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:42.993 12:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:42.993 12:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:42.993 12:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:42.993 12:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:42.993 12:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.993 12:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:42.993 12:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.993 12:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.993 12:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.993 12:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.993 12:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.993 12:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.993 12:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.993 12:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.993 12:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.993 12:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.993 "name": "Existed_Raid", 
00:15:42.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.993 "strip_size_kb": 64, 00:15:42.993 "state": "configuring", 00:15:42.993 "raid_level": "concat", 00:15:42.993 "superblock": false, 00:15:42.993 "num_base_bdevs": 4, 00:15:42.993 "num_base_bdevs_discovered": 1, 00:15:42.993 "num_base_bdevs_operational": 4, 00:15:42.993 "base_bdevs_list": [ 00:15:42.993 { 00:15:42.993 "name": "BaseBdev1", 00:15:42.993 "uuid": "c3881bac-e41b-4669-9e00-fd24ce6f47d0", 00:15:42.993 "is_configured": true, 00:15:42.993 "data_offset": 0, 00:15:42.993 "data_size": 65536 00:15:42.993 }, 00:15:42.993 { 00:15:42.993 "name": "BaseBdev2", 00:15:42.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.993 "is_configured": false, 00:15:42.993 "data_offset": 0, 00:15:42.993 "data_size": 0 00:15:42.993 }, 00:15:42.993 { 00:15:42.993 "name": "BaseBdev3", 00:15:42.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.993 "is_configured": false, 00:15:42.993 "data_offset": 0, 00:15:42.993 "data_size": 0 00:15:42.993 }, 00:15:42.993 { 00:15:42.993 "name": "BaseBdev4", 00:15:42.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.993 "is_configured": false, 00:15:42.993 "data_offset": 0, 00:15:42.993 "data_size": 0 00:15:42.993 } 00:15:42.993 ] 00:15:42.993 }' 00:15:42.993 12:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.993 12:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.560 12:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:43.560 12:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.560 12:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.560 [2024-11-25 12:14:39.392746] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:43.560 [2024-11-25 12:14:39.392852] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:43.560 12:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.560 12:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:43.560 12:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.560 12:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.560 [2024-11-25 12:14:39.400769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:43.560 [2024-11-25 12:14:39.403534] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:43.560 [2024-11-25 12:14:39.403590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:43.560 [2024-11-25 12:14:39.403606] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:43.560 [2024-11-25 12:14:39.403623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:43.560 [2024-11-25 12:14:39.403633] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:43.560 [2024-11-25 12:14:39.403648] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:43.560 12:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.560 12:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:43.560 12:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:43.560 12:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:15:43.560 12:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.560 12:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:43.560 12:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:43.560 12:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.560 12:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:43.560 12:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.560 12:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.560 12:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.560 12:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.560 12:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.560 12:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.560 12:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.560 12:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.560 12:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.560 12:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.560 "name": "Existed_Raid", 00:15:43.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.560 "strip_size_kb": 64, 00:15:43.560 "state": "configuring", 00:15:43.560 "raid_level": "concat", 00:15:43.560 "superblock": false, 00:15:43.560 "num_base_bdevs": 4, 00:15:43.560 
"num_base_bdevs_discovered": 1, 00:15:43.560 "num_base_bdevs_operational": 4, 00:15:43.560 "base_bdevs_list": [ 00:15:43.560 { 00:15:43.560 "name": "BaseBdev1", 00:15:43.560 "uuid": "c3881bac-e41b-4669-9e00-fd24ce6f47d0", 00:15:43.560 "is_configured": true, 00:15:43.560 "data_offset": 0, 00:15:43.560 "data_size": 65536 00:15:43.560 }, 00:15:43.560 { 00:15:43.560 "name": "BaseBdev2", 00:15:43.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.560 "is_configured": false, 00:15:43.560 "data_offset": 0, 00:15:43.560 "data_size": 0 00:15:43.560 }, 00:15:43.560 { 00:15:43.560 "name": "BaseBdev3", 00:15:43.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.561 "is_configured": false, 00:15:43.561 "data_offset": 0, 00:15:43.561 "data_size": 0 00:15:43.561 }, 00:15:43.561 { 00:15:43.561 "name": "BaseBdev4", 00:15:43.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.561 "is_configured": false, 00:15:43.561 "data_offset": 0, 00:15:43.561 "data_size": 0 00:15:43.561 } 00:15:43.561 ] 00:15:43.561 }' 00:15:43.561 12:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.561 12:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.819 12:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:43.819 12:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.819 12:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.078 [2024-11-25 12:14:39.934961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:44.078 BaseBdev2 00:15:44.078 12:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.078 12:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:44.078 12:14:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:44.078 12:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:44.078 12:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:44.078 12:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:44.078 12:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:44.078 12:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:44.078 12:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.078 12:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.078 12:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.078 12:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:44.078 12:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.078 12:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.078 [ 00:15:44.078 { 00:15:44.078 "name": "BaseBdev2", 00:15:44.078 "aliases": [ 00:15:44.078 "8a2c73e2-b8bd-460b-ad55-519273ea4738" 00:15:44.078 ], 00:15:44.078 "product_name": "Malloc disk", 00:15:44.078 "block_size": 512, 00:15:44.078 "num_blocks": 65536, 00:15:44.078 "uuid": "8a2c73e2-b8bd-460b-ad55-519273ea4738", 00:15:44.079 "assigned_rate_limits": { 00:15:44.079 "rw_ios_per_sec": 0, 00:15:44.079 "rw_mbytes_per_sec": 0, 00:15:44.079 "r_mbytes_per_sec": 0, 00:15:44.079 "w_mbytes_per_sec": 0 00:15:44.079 }, 00:15:44.079 "claimed": true, 00:15:44.079 "claim_type": "exclusive_write", 00:15:44.079 "zoned": false, 00:15:44.079 "supported_io_types": { 
00:15:44.079 "read": true, 00:15:44.079 "write": true, 00:15:44.079 "unmap": true, 00:15:44.079 "flush": true, 00:15:44.079 "reset": true, 00:15:44.079 "nvme_admin": false, 00:15:44.079 "nvme_io": false, 00:15:44.079 "nvme_io_md": false, 00:15:44.079 "write_zeroes": true, 00:15:44.079 "zcopy": true, 00:15:44.079 "get_zone_info": false, 00:15:44.079 "zone_management": false, 00:15:44.079 "zone_append": false, 00:15:44.079 "compare": false, 00:15:44.079 "compare_and_write": false, 00:15:44.079 "abort": true, 00:15:44.079 "seek_hole": false, 00:15:44.079 "seek_data": false, 00:15:44.079 "copy": true, 00:15:44.079 "nvme_iov_md": false 00:15:44.079 }, 00:15:44.079 "memory_domains": [ 00:15:44.079 { 00:15:44.079 "dma_device_id": "system", 00:15:44.079 "dma_device_type": 1 00:15:44.079 }, 00:15:44.079 { 00:15:44.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.079 "dma_device_type": 2 00:15:44.079 } 00:15:44.079 ], 00:15:44.079 "driver_specific": {} 00:15:44.079 } 00:15:44.079 ] 00:15:44.079 12:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.079 12:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:44.079 12:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:44.079 12:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:44.079 12:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:44.079 12:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.079 12:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:44.079 12:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:44.079 12:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:15:44.079 12:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:44.079 12:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.079 12:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.079 12:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.079 12:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.079 12:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.079 12:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.079 12:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.079 12:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.079 12:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.079 12:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.079 "name": "Existed_Raid", 00:15:44.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.079 "strip_size_kb": 64, 00:15:44.079 "state": "configuring", 00:15:44.079 "raid_level": "concat", 00:15:44.079 "superblock": false, 00:15:44.079 "num_base_bdevs": 4, 00:15:44.079 "num_base_bdevs_discovered": 2, 00:15:44.079 "num_base_bdevs_operational": 4, 00:15:44.079 "base_bdevs_list": [ 00:15:44.079 { 00:15:44.079 "name": "BaseBdev1", 00:15:44.079 "uuid": "c3881bac-e41b-4669-9e00-fd24ce6f47d0", 00:15:44.079 "is_configured": true, 00:15:44.079 "data_offset": 0, 00:15:44.079 "data_size": 65536 00:15:44.079 }, 00:15:44.079 { 00:15:44.079 "name": "BaseBdev2", 00:15:44.079 "uuid": "8a2c73e2-b8bd-460b-ad55-519273ea4738", 00:15:44.079 
"is_configured": true, 00:15:44.079 "data_offset": 0, 00:15:44.079 "data_size": 65536 00:15:44.079 }, 00:15:44.079 { 00:15:44.079 "name": "BaseBdev3", 00:15:44.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.079 "is_configured": false, 00:15:44.079 "data_offset": 0, 00:15:44.079 "data_size": 0 00:15:44.079 }, 00:15:44.079 { 00:15:44.079 "name": "BaseBdev4", 00:15:44.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.079 "is_configured": false, 00:15:44.079 "data_offset": 0, 00:15:44.079 "data_size": 0 00:15:44.079 } 00:15:44.079 ] 00:15:44.079 }' 00:15:44.079 12:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.079 12:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.646 12:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:44.646 12:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.646 12:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.646 [2024-11-25 12:14:40.506485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:44.646 BaseBdev3 00:15:44.646 12:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.646 12:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:44.646 12:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:44.646 12:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:44.646 12:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:44.646 12:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:44.646 12:14:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:44.646 12:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:44.646 12:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.646 12:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.646 12:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.646 12:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:44.646 12:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.646 12:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.646 [ 00:15:44.646 { 00:15:44.646 "name": "BaseBdev3", 00:15:44.646 "aliases": [ 00:15:44.646 "7ff78d3b-3423-418e-bbf2-a22b00e625c9" 00:15:44.646 ], 00:15:44.646 "product_name": "Malloc disk", 00:15:44.646 "block_size": 512, 00:15:44.646 "num_blocks": 65536, 00:15:44.646 "uuid": "7ff78d3b-3423-418e-bbf2-a22b00e625c9", 00:15:44.646 "assigned_rate_limits": { 00:15:44.646 "rw_ios_per_sec": 0, 00:15:44.646 "rw_mbytes_per_sec": 0, 00:15:44.646 "r_mbytes_per_sec": 0, 00:15:44.646 "w_mbytes_per_sec": 0 00:15:44.646 }, 00:15:44.646 "claimed": true, 00:15:44.646 "claim_type": "exclusive_write", 00:15:44.646 "zoned": false, 00:15:44.647 "supported_io_types": { 00:15:44.647 "read": true, 00:15:44.647 "write": true, 00:15:44.647 "unmap": true, 00:15:44.647 "flush": true, 00:15:44.647 "reset": true, 00:15:44.647 "nvme_admin": false, 00:15:44.647 "nvme_io": false, 00:15:44.647 "nvme_io_md": false, 00:15:44.647 "write_zeroes": true, 00:15:44.647 "zcopy": true, 00:15:44.647 "get_zone_info": false, 00:15:44.647 "zone_management": false, 00:15:44.647 "zone_append": false, 00:15:44.647 "compare": false, 00:15:44.647 "compare_and_write": false, 
00:15:44.647 "abort": true, 00:15:44.647 "seek_hole": false, 00:15:44.647 "seek_data": false, 00:15:44.647 "copy": true, 00:15:44.647 "nvme_iov_md": false 00:15:44.647 }, 00:15:44.647 "memory_domains": [ 00:15:44.647 { 00:15:44.647 "dma_device_id": "system", 00:15:44.647 "dma_device_type": 1 00:15:44.647 }, 00:15:44.647 { 00:15:44.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.647 "dma_device_type": 2 00:15:44.647 } 00:15:44.647 ], 00:15:44.647 "driver_specific": {} 00:15:44.647 } 00:15:44.647 ] 00:15:44.647 12:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.647 12:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:44.647 12:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:44.647 12:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:44.647 12:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:44.647 12:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.647 12:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:44.647 12:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:44.647 12:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.647 12:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:44.647 12:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.647 12:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.647 12:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:44.647 12:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.647 12:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.647 12:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.647 12:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.647 12:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.647 12:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.647 12:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.647 "name": "Existed_Raid", 00:15:44.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.647 "strip_size_kb": 64, 00:15:44.647 "state": "configuring", 00:15:44.647 "raid_level": "concat", 00:15:44.647 "superblock": false, 00:15:44.647 "num_base_bdevs": 4, 00:15:44.647 "num_base_bdevs_discovered": 3, 00:15:44.647 "num_base_bdevs_operational": 4, 00:15:44.647 "base_bdevs_list": [ 00:15:44.647 { 00:15:44.647 "name": "BaseBdev1", 00:15:44.647 "uuid": "c3881bac-e41b-4669-9e00-fd24ce6f47d0", 00:15:44.647 "is_configured": true, 00:15:44.647 "data_offset": 0, 00:15:44.647 "data_size": 65536 00:15:44.647 }, 00:15:44.647 { 00:15:44.647 "name": "BaseBdev2", 00:15:44.647 "uuid": "8a2c73e2-b8bd-460b-ad55-519273ea4738", 00:15:44.647 "is_configured": true, 00:15:44.647 "data_offset": 0, 00:15:44.647 "data_size": 65536 00:15:44.647 }, 00:15:44.647 { 00:15:44.647 "name": "BaseBdev3", 00:15:44.647 "uuid": "7ff78d3b-3423-418e-bbf2-a22b00e625c9", 00:15:44.647 "is_configured": true, 00:15:44.647 "data_offset": 0, 00:15:44.647 "data_size": 65536 00:15:44.647 }, 00:15:44.647 { 00:15:44.647 "name": "BaseBdev4", 00:15:44.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.647 "is_configured": false, 
00:15:44.647 "data_offset": 0, 00:15:44.647 "data_size": 0 00:15:44.647 } 00:15:44.647 ] 00:15:44.647 }' 00:15:44.647 12:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.647 12:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.218 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:45.218 12:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.218 12:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.218 [2024-11-25 12:14:41.140990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:45.218 [2024-11-25 12:14:41.141073] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:45.218 [2024-11-25 12:14:41.141089] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:15:45.218 [2024-11-25 12:14:41.141483] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:45.218 [2024-11-25 12:14:41.141713] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:45.218 [2024-11-25 12:14:41.141744] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:45.218 [2024-11-25 12:14:41.142115] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.218 BaseBdev4 00:15:45.218 12:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.218 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:45.218 12:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:45.218 12:14:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:45.218 12:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:45.218 12:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:45.218 12:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:45.218 12:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:45.218 12:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.218 12:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.218 12:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.218 12:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:45.218 12:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.218 12:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.218 [ 00:15:45.218 { 00:15:45.218 "name": "BaseBdev4", 00:15:45.218 "aliases": [ 00:15:45.218 "b67ea91c-f81e-4712-8009-1bc1df025e84" 00:15:45.218 ], 00:15:45.218 "product_name": "Malloc disk", 00:15:45.218 "block_size": 512, 00:15:45.218 "num_blocks": 65536, 00:15:45.218 "uuid": "b67ea91c-f81e-4712-8009-1bc1df025e84", 00:15:45.218 "assigned_rate_limits": { 00:15:45.218 "rw_ios_per_sec": 0, 00:15:45.218 "rw_mbytes_per_sec": 0, 00:15:45.218 "r_mbytes_per_sec": 0, 00:15:45.218 "w_mbytes_per_sec": 0 00:15:45.218 }, 00:15:45.218 "claimed": true, 00:15:45.218 "claim_type": "exclusive_write", 00:15:45.218 "zoned": false, 00:15:45.218 "supported_io_types": { 00:15:45.218 "read": true, 00:15:45.218 "write": true, 00:15:45.218 "unmap": true, 00:15:45.218 "flush": true, 00:15:45.218 "reset": true, 00:15:45.218 
"nvme_admin": false, 00:15:45.218 "nvme_io": false, 00:15:45.218 "nvme_io_md": false, 00:15:45.218 "write_zeroes": true, 00:15:45.218 "zcopy": true, 00:15:45.218 "get_zone_info": false, 00:15:45.218 "zone_management": false, 00:15:45.218 "zone_append": false, 00:15:45.218 "compare": false, 00:15:45.218 "compare_and_write": false, 00:15:45.218 "abort": true, 00:15:45.218 "seek_hole": false, 00:15:45.218 "seek_data": false, 00:15:45.218 "copy": true, 00:15:45.218 "nvme_iov_md": false 00:15:45.218 }, 00:15:45.218 "memory_domains": [ 00:15:45.218 { 00:15:45.218 "dma_device_id": "system", 00:15:45.218 "dma_device_type": 1 00:15:45.218 }, 00:15:45.218 { 00:15:45.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.218 "dma_device_type": 2 00:15:45.218 } 00:15:45.218 ], 00:15:45.218 "driver_specific": {} 00:15:45.218 } 00:15:45.218 ] 00:15:45.218 12:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.218 12:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:45.218 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:45.218 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:45.218 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:15:45.218 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.218 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.218 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:45.218 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.218 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:45.218 
12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.218 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.218 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.218 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.218 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.218 12:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.218 12:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.218 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.218 12:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.218 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.218 "name": "Existed_Raid", 00:15:45.218 "uuid": "f9eefe3f-ae37-4b6b-acc0-9ca6bfbfaa1c", 00:15:45.218 "strip_size_kb": 64, 00:15:45.218 "state": "online", 00:15:45.218 "raid_level": "concat", 00:15:45.218 "superblock": false, 00:15:45.218 "num_base_bdevs": 4, 00:15:45.218 "num_base_bdevs_discovered": 4, 00:15:45.218 "num_base_bdevs_operational": 4, 00:15:45.218 "base_bdevs_list": [ 00:15:45.218 { 00:15:45.218 "name": "BaseBdev1", 00:15:45.218 "uuid": "c3881bac-e41b-4669-9e00-fd24ce6f47d0", 00:15:45.218 "is_configured": true, 00:15:45.218 "data_offset": 0, 00:15:45.218 "data_size": 65536 00:15:45.218 }, 00:15:45.218 { 00:15:45.218 "name": "BaseBdev2", 00:15:45.218 "uuid": "8a2c73e2-b8bd-460b-ad55-519273ea4738", 00:15:45.218 "is_configured": true, 00:15:45.218 "data_offset": 0, 00:15:45.218 "data_size": 65536 00:15:45.218 }, 00:15:45.218 { 00:15:45.218 "name": "BaseBdev3", 
00:15:45.218 "uuid": "7ff78d3b-3423-418e-bbf2-a22b00e625c9", 00:15:45.218 "is_configured": true, 00:15:45.218 "data_offset": 0, 00:15:45.218 "data_size": 65536 00:15:45.218 }, 00:15:45.218 { 00:15:45.219 "name": "BaseBdev4", 00:15:45.219 "uuid": "b67ea91c-f81e-4712-8009-1bc1df025e84", 00:15:45.219 "is_configured": true, 00:15:45.219 "data_offset": 0, 00:15:45.219 "data_size": 65536 00:15:45.219 } 00:15:45.219 ] 00:15:45.219 }' 00:15:45.219 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.219 12:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.826 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:45.826 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:45.826 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:45.826 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:45.826 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:45.826 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:45.826 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:45.826 12:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.826 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:45.826 12:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.826 [2024-11-25 12:14:41.657751] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:45.826 12:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.826 
12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:45.826 "name": "Existed_Raid", 00:15:45.826 "aliases": [ 00:15:45.826 "f9eefe3f-ae37-4b6b-acc0-9ca6bfbfaa1c" 00:15:45.826 ], 00:15:45.826 "product_name": "Raid Volume", 00:15:45.826 "block_size": 512, 00:15:45.826 "num_blocks": 262144, 00:15:45.826 "uuid": "f9eefe3f-ae37-4b6b-acc0-9ca6bfbfaa1c", 00:15:45.826 "assigned_rate_limits": { 00:15:45.826 "rw_ios_per_sec": 0, 00:15:45.826 "rw_mbytes_per_sec": 0, 00:15:45.826 "r_mbytes_per_sec": 0, 00:15:45.826 "w_mbytes_per_sec": 0 00:15:45.826 }, 00:15:45.826 "claimed": false, 00:15:45.827 "zoned": false, 00:15:45.827 "supported_io_types": { 00:15:45.827 "read": true, 00:15:45.827 "write": true, 00:15:45.827 "unmap": true, 00:15:45.827 "flush": true, 00:15:45.827 "reset": true, 00:15:45.827 "nvme_admin": false, 00:15:45.827 "nvme_io": false, 00:15:45.827 "nvme_io_md": false, 00:15:45.827 "write_zeroes": true, 00:15:45.827 "zcopy": false, 00:15:45.827 "get_zone_info": false, 00:15:45.827 "zone_management": false, 00:15:45.827 "zone_append": false, 00:15:45.827 "compare": false, 00:15:45.827 "compare_and_write": false, 00:15:45.827 "abort": false, 00:15:45.827 "seek_hole": false, 00:15:45.827 "seek_data": false, 00:15:45.827 "copy": false, 00:15:45.827 "nvme_iov_md": false 00:15:45.827 }, 00:15:45.827 "memory_domains": [ 00:15:45.827 { 00:15:45.827 "dma_device_id": "system", 00:15:45.827 "dma_device_type": 1 00:15:45.827 }, 00:15:45.827 { 00:15:45.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.827 "dma_device_type": 2 00:15:45.827 }, 00:15:45.827 { 00:15:45.827 "dma_device_id": "system", 00:15:45.827 "dma_device_type": 1 00:15:45.827 }, 00:15:45.827 { 00:15:45.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.827 "dma_device_type": 2 00:15:45.827 }, 00:15:45.827 { 00:15:45.827 "dma_device_id": "system", 00:15:45.827 "dma_device_type": 1 00:15:45.827 }, 00:15:45.827 { 00:15:45.827 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:45.827 "dma_device_type": 2 00:15:45.827 }, 00:15:45.827 { 00:15:45.827 "dma_device_id": "system", 00:15:45.827 "dma_device_type": 1 00:15:45.827 }, 00:15:45.827 { 00:15:45.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.827 "dma_device_type": 2 00:15:45.827 } 00:15:45.827 ], 00:15:45.827 "driver_specific": { 00:15:45.827 "raid": { 00:15:45.827 "uuid": "f9eefe3f-ae37-4b6b-acc0-9ca6bfbfaa1c", 00:15:45.827 "strip_size_kb": 64, 00:15:45.827 "state": "online", 00:15:45.827 "raid_level": "concat", 00:15:45.827 "superblock": false, 00:15:45.827 "num_base_bdevs": 4, 00:15:45.827 "num_base_bdevs_discovered": 4, 00:15:45.827 "num_base_bdevs_operational": 4, 00:15:45.827 "base_bdevs_list": [ 00:15:45.827 { 00:15:45.827 "name": "BaseBdev1", 00:15:45.827 "uuid": "c3881bac-e41b-4669-9e00-fd24ce6f47d0", 00:15:45.827 "is_configured": true, 00:15:45.827 "data_offset": 0, 00:15:45.827 "data_size": 65536 00:15:45.827 }, 00:15:45.827 { 00:15:45.827 "name": "BaseBdev2", 00:15:45.827 "uuid": "8a2c73e2-b8bd-460b-ad55-519273ea4738", 00:15:45.827 "is_configured": true, 00:15:45.827 "data_offset": 0, 00:15:45.827 "data_size": 65536 00:15:45.827 }, 00:15:45.827 { 00:15:45.827 "name": "BaseBdev3", 00:15:45.827 "uuid": "7ff78d3b-3423-418e-bbf2-a22b00e625c9", 00:15:45.827 "is_configured": true, 00:15:45.827 "data_offset": 0, 00:15:45.827 "data_size": 65536 00:15:45.827 }, 00:15:45.827 { 00:15:45.827 "name": "BaseBdev4", 00:15:45.827 "uuid": "b67ea91c-f81e-4712-8009-1bc1df025e84", 00:15:45.827 "is_configured": true, 00:15:45.827 "data_offset": 0, 00:15:45.827 "data_size": 65536 00:15:45.827 } 00:15:45.827 ] 00:15:45.827 } 00:15:45.827 } 00:15:45.827 }' 00:15:45.827 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:45.827 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:45.827 BaseBdev2 
00:15:45.827 BaseBdev3 00:15:45.827 BaseBdev4' 00:15:45.827 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.827 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:45.827 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:45.827 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:45.827 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.827 12:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.827 12:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.827 12:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.827 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:45.827 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:45.827 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:45.827 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:45.827 12:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.827 12:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.827 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.827 12:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.827 12:14:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:45.827 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:45.827 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:45.827 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:45.827 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.827 12:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.827 12:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.086 12:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.086 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:46.086 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:46.086 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:46.086 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:46.086 12:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.086 12:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.086 12:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.086 12:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.086 12:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:46.086 12:14:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:46.086 12:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:46.086 12:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.086 12:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.086 [2024-11-25 12:14:42.021484] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:46.086 [2024-11-25 12:14:42.021552] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:46.086 [2024-11-25 12:14:42.021631] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:46.086 12:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.086 12:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:46.087 12:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:15:46.087 12:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:46.087 12:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:46.087 12:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:15:46.087 12:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:15:46.087 12:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:46.087 12:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:15:46.087 12:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:46.087 12:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:15:46.087 12:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:46.087 12:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.087 12:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.087 12:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.087 12:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.087 12:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.087 12:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.087 12:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.087 12:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.087 12:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.087 12:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.087 "name": "Existed_Raid", 00:15:46.087 "uuid": "f9eefe3f-ae37-4b6b-acc0-9ca6bfbfaa1c", 00:15:46.087 "strip_size_kb": 64, 00:15:46.087 "state": "offline", 00:15:46.087 "raid_level": "concat", 00:15:46.087 "superblock": false, 00:15:46.087 "num_base_bdevs": 4, 00:15:46.087 "num_base_bdevs_discovered": 3, 00:15:46.087 "num_base_bdevs_operational": 3, 00:15:46.087 "base_bdevs_list": [ 00:15:46.087 { 00:15:46.087 "name": null, 00:15:46.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.087 "is_configured": false, 00:15:46.087 "data_offset": 0, 00:15:46.087 "data_size": 65536 00:15:46.087 }, 00:15:46.087 { 00:15:46.087 "name": "BaseBdev2", 00:15:46.087 "uuid": "8a2c73e2-b8bd-460b-ad55-519273ea4738", 00:15:46.087 "is_configured": 
true, 00:15:46.087 "data_offset": 0, 00:15:46.087 "data_size": 65536 00:15:46.087 }, 00:15:46.087 { 00:15:46.087 "name": "BaseBdev3", 00:15:46.087 "uuid": "7ff78d3b-3423-418e-bbf2-a22b00e625c9", 00:15:46.087 "is_configured": true, 00:15:46.087 "data_offset": 0, 00:15:46.087 "data_size": 65536 00:15:46.087 }, 00:15:46.087 { 00:15:46.087 "name": "BaseBdev4", 00:15:46.087 "uuid": "b67ea91c-f81e-4712-8009-1bc1df025e84", 00:15:46.087 "is_configured": true, 00:15:46.087 "data_offset": 0, 00:15:46.087 "data_size": 65536 00:15:46.087 } 00:15:46.087 ] 00:15:46.087 }' 00:15:46.087 12:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.087 12:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.654 12:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:46.654 12:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:46.654 12:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:46.654 12:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.654 12:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.654 12:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.654 12:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.654 12:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:46.654 12:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:46.654 12:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:46.654 12:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:46.654 12:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.654 [2024-11-25 12:14:42.642899] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:46.654 12:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.654 12:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:46.654 12:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:46.654 12:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.654 12:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:46.654 12:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.654 12:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.912 12:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.912 12:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:46.912 12:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:46.912 12:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:46.912 12:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.912 12:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.912 [2024-11-25 12:14:42.796079] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:46.912 12:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.912 12:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:46.912 12:14:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:46.912 12:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.912 12:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.912 12:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.912 12:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:46.912 12:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.912 12:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:46.912 12:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:46.912 12:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:46.912 12:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.912 12:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.912 [2024-11-25 12:14:42.944744] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:46.912 [2024-11-25 12:14:42.944862] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:47.171 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.171 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:47.171 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:47.171 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.171 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:15:47.171 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.171 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.171 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.171 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:47.171 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:47.171 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:47.171 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:47.171 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:47.171 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:47.171 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.171 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.171 BaseBdev2 00:15:47.171 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.171 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:47.171 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:47.171 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:47.171 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:47.171 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:47.171 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:15:47.171 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:47.171 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.172 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.172 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.172 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:47.172 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.172 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.172 [ 00:15:47.172 { 00:15:47.172 "name": "BaseBdev2", 00:15:47.172 "aliases": [ 00:15:47.172 "1bb83140-5cb5-4f64-8da3-7d64e9dec69f" 00:15:47.172 ], 00:15:47.172 "product_name": "Malloc disk", 00:15:47.172 "block_size": 512, 00:15:47.172 "num_blocks": 65536, 00:15:47.172 "uuid": "1bb83140-5cb5-4f64-8da3-7d64e9dec69f", 00:15:47.172 "assigned_rate_limits": { 00:15:47.172 "rw_ios_per_sec": 0, 00:15:47.172 "rw_mbytes_per_sec": 0, 00:15:47.172 "r_mbytes_per_sec": 0, 00:15:47.172 "w_mbytes_per_sec": 0 00:15:47.172 }, 00:15:47.172 "claimed": false, 00:15:47.172 "zoned": false, 00:15:47.172 "supported_io_types": { 00:15:47.172 "read": true, 00:15:47.172 "write": true, 00:15:47.172 "unmap": true, 00:15:47.172 "flush": true, 00:15:47.172 "reset": true, 00:15:47.172 "nvme_admin": false, 00:15:47.172 "nvme_io": false, 00:15:47.172 "nvme_io_md": false, 00:15:47.172 "write_zeroes": true, 00:15:47.172 "zcopy": true, 00:15:47.172 "get_zone_info": false, 00:15:47.172 "zone_management": false, 00:15:47.172 "zone_append": false, 00:15:47.172 "compare": false, 00:15:47.172 "compare_and_write": false, 00:15:47.172 "abort": true, 00:15:47.172 "seek_hole": false, 00:15:47.172 
"seek_data": false, 00:15:47.172 "copy": true, 00:15:47.172 "nvme_iov_md": false 00:15:47.172 }, 00:15:47.172 "memory_domains": [ 00:15:47.172 { 00:15:47.172 "dma_device_id": "system", 00:15:47.172 "dma_device_type": 1 00:15:47.172 }, 00:15:47.172 { 00:15:47.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.172 "dma_device_type": 2 00:15:47.172 } 00:15:47.172 ], 00:15:47.172 "driver_specific": {} 00:15:47.172 } 00:15:47.172 ] 00:15:47.172 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.172 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:47.172 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:47.172 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:47.172 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:47.172 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.172 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.172 BaseBdev3 00:15:47.172 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.172 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:47.172 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:47.172 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:47.172 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:47.172 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:47.172 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:15:47.172 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:47.172 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.172 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.172 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.172 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:47.172 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.172 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.172 [ 00:15:47.172 { 00:15:47.172 "name": "BaseBdev3", 00:15:47.172 "aliases": [ 00:15:47.172 "23f792b8-952d-46c1-8c29-7df32b3efcc5" 00:15:47.172 ], 00:15:47.172 "product_name": "Malloc disk", 00:15:47.172 "block_size": 512, 00:15:47.172 "num_blocks": 65536, 00:15:47.172 "uuid": "23f792b8-952d-46c1-8c29-7df32b3efcc5", 00:15:47.172 "assigned_rate_limits": { 00:15:47.172 "rw_ios_per_sec": 0, 00:15:47.172 "rw_mbytes_per_sec": 0, 00:15:47.172 "r_mbytes_per_sec": 0, 00:15:47.172 "w_mbytes_per_sec": 0 00:15:47.172 }, 00:15:47.172 "claimed": false, 00:15:47.172 "zoned": false, 00:15:47.172 "supported_io_types": { 00:15:47.172 "read": true, 00:15:47.172 "write": true, 00:15:47.172 "unmap": true, 00:15:47.172 "flush": true, 00:15:47.172 "reset": true, 00:15:47.172 "nvme_admin": false, 00:15:47.172 "nvme_io": false, 00:15:47.172 "nvme_io_md": false, 00:15:47.172 "write_zeroes": true, 00:15:47.172 "zcopy": true, 00:15:47.172 "get_zone_info": false, 00:15:47.172 "zone_management": false, 00:15:47.172 "zone_append": false, 00:15:47.172 "compare": false, 00:15:47.172 "compare_and_write": false, 00:15:47.172 "abort": true, 00:15:47.172 "seek_hole": false, 00:15:47.172 "seek_data": false, 
00:15:47.172 "copy": true, 00:15:47.172 "nvme_iov_md": false 00:15:47.172 }, 00:15:47.172 "memory_domains": [ 00:15:47.172 { 00:15:47.172 "dma_device_id": "system", 00:15:47.172 "dma_device_type": 1 00:15:47.172 }, 00:15:47.172 { 00:15:47.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.172 "dma_device_type": 2 00:15:47.172 } 00:15:47.172 ], 00:15:47.172 "driver_specific": {} 00:15:47.172 } 00:15:47.172 ] 00:15:47.172 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.172 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:47.172 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:47.172 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:47.172 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:47.172 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.172 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.430 BaseBdev4 00:15:47.430 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.430 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:47.430 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:47.430 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:47.430 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:47.430 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:47.430 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:47.430 
12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:47.431 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.431 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.431 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.431 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:47.431 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.431 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.431 [ 00:15:47.431 { 00:15:47.431 "name": "BaseBdev4", 00:15:47.431 "aliases": [ 00:15:47.431 "528ad582-69c7-4709-bd39-79830d22f1b8" 00:15:47.431 ], 00:15:47.431 "product_name": "Malloc disk", 00:15:47.431 "block_size": 512, 00:15:47.431 "num_blocks": 65536, 00:15:47.431 "uuid": "528ad582-69c7-4709-bd39-79830d22f1b8", 00:15:47.431 "assigned_rate_limits": { 00:15:47.431 "rw_ios_per_sec": 0, 00:15:47.431 "rw_mbytes_per_sec": 0, 00:15:47.431 "r_mbytes_per_sec": 0, 00:15:47.431 "w_mbytes_per_sec": 0 00:15:47.431 }, 00:15:47.431 "claimed": false, 00:15:47.431 "zoned": false, 00:15:47.431 "supported_io_types": { 00:15:47.431 "read": true, 00:15:47.431 "write": true, 00:15:47.431 "unmap": true, 00:15:47.431 "flush": true, 00:15:47.431 "reset": true, 00:15:47.431 "nvme_admin": false, 00:15:47.431 "nvme_io": false, 00:15:47.431 "nvme_io_md": false, 00:15:47.431 "write_zeroes": true, 00:15:47.431 "zcopy": true, 00:15:47.431 "get_zone_info": false, 00:15:47.431 "zone_management": false, 00:15:47.431 "zone_append": false, 00:15:47.431 "compare": false, 00:15:47.431 "compare_and_write": false, 00:15:47.431 "abort": true, 00:15:47.431 "seek_hole": false, 00:15:47.431 "seek_data": false, 00:15:47.431 
"copy": true, 00:15:47.431 "nvme_iov_md": false 00:15:47.431 }, 00:15:47.431 "memory_domains": [ 00:15:47.431 { 00:15:47.431 "dma_device_id": "system", 00:15:47.431 "dma_device_type": 1 00:15:47.431 }, 00:15:47.431 { 00:15:47.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.431 "dma_device_type": 2 00:15:47.431 } 00:15:47.431 ], 00:15:47.431 "driver_specific": {} 00:15:47.431 } 00:15:47.431 ] 00:15:47.431 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.431 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:47.431 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:47.431 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:47.431 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:47.431 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.431 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.431 [2024-11-25 12:14:43.338310] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:47.431 [2024-11-25 12:14:43.338420] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:47.431 [2024-11-25 12:14:43.338455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:47.431 [2024-11-25 12:14:43.341273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:47.431 [2024-11-25 12:14:43.341367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:47.431 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.431 12:14:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:47.431 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:47.431 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:47.431 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:47.431 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.431 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:47.431 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.431 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.431 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.431 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.431 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.431 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.431 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.431 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.431 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.431 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.431 "name": "Existed_Raid", 00:15:47.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.431 "strip_size_kb": 64, 00:15:47.431 "state": "configuring", 00:15:47.431 
"raid_level": "concat", 00:15:47.431 "superblock": false, 00:15:47.431 "num_base_bdevs": 4, 00:15:47.431 "num_base_bdevs_discovered": 3, 00:15:47.431 "num_base_bdevs_operational": 4, 00:15:47.431 "base_bdevs_list": [ 00:15:47.431 { 00:15:47.431 "name": "BaseBdev1", 00:15:47.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.431 "is_configured": false, 00:15:47.431 "data_offset": 0, 00:15:47.431 "data_size": 0 00:15:47.431 }, 00:15:47.431 { 00:15:47.431 "name": "BaseBdev2", 00:15:47.431 "uuid": "1bb83140-5cb5-4f64-8da3-7d64e9dec69f", 00:15:47.431 "is_configured": true, 00:15:47.431 "data_offset": 0, 00:15:47.431 "data_size": 65536 00:15:47.431 }, 00:15:47.431 { 00:15:47.431 "name": "BaseBdev3", 00:15:47.431 "uuid": "23f792b8-952d-46c1-8c29-7df32b3efcc5", 00:15:47.431 "is_configured": true, 00:15:47.431 "data_offset": 0, 00:15:47.431 "data_size": 65536 00:15:47.431 }, 00:15:47.431 { 00:15:47.431 "name": "BaseBdev4", 00:15:47.431 "uuid": "528ad582-69c7-4709-bd39-79830d22f1b8", 00:15:47.431 "is_configured": true, 00:15:47.431 "data_offset": 0, 00:15:47.431 "data_size": 65536 00:15:47.431 } 00:15:47.431 ] 00:15:47.431 }' 00:15:47.431 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.431 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.999 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:47.999 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.999 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.999 [2024-11-25 12:14:43.842518] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:47.999 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.999 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:47.999 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:47.999 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:47.999 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:47.999 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.999 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:47.999 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.999 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.999 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.999 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.999 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.999 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.999 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.999 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.999 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.999 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.999 "name": "Existed_Raid", 00:15:47.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.999 "strip_size_kb": 64, 00:15:47.999 "state": "configuring", 00:15:47.999 "raid_level": "concat", 00:15:47.999 "superblock": false, 
00:15:47.999 "num_base_bdevs": 4, 00:15:47.999 "num_base_bdevs_discovered": 2, 00:15:47.999 "num_base_bdevs_operational": 4, 00:15:47.999 "base_bdevs_list": [ 00:15:47.999 { 00:15:47.999 "name": "BaseBdev1", 00:15:47.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.999 "is_configured": false, 00:15:47.999 "data_offset": 0, 00:15:47.999 "data_size": 0 00:15:47.999 }, 00:15:47.999 { 00:15:47.999 "name": null, 00:15:47.999 "uuid": "1bb83140-5cb5-4f64-8da3-7d64e9dec69f", 00:15:47.999 "is_configured": false, 00:15:47.999 "data_offset": 0, 00:15:47.999 "data_size": 65536 00:15:47.999 }, 00:15:47.999 { 00:15:47.999 "name": "BaseBdev3", 00:15:47.999 "uuid": "23f792b8-952d-46c1-8c29-7df32b3efcc5", 00:15:47.999 "is_configured": true, 00:15:47.999 "data_offset": 0, 00:15:47.999 "data_size": 65536 00:15:47.999 }, 00:15:47.999 { 00:15:47.999 "name": "BaseBdev4", 00:15:47.999 "uuid": "528ad582-69c7-4709-bd39-79830d22f1b8", 00:15:47.999 "is_configured": true, 00:15:47.999 "data_offset": 0, 00:15:47.999 "data_size": 65536 00:15:47.999 } 00:15:47.999 ] 00:15:47.999 }' 00:15:47.999 12:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.999 12:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.566 12:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.566 12:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:48.566 12:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.566 12:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.566 12:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.566 12:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:48.566 12:14:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:48.566 12:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.566 12:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.566 [2024-11-25 12:14:44.444118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:48.566 BaseBdev1 00:15:48.566 12:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.566 12:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:48.566 12:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:48.566 12:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:48.566 12:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:48.566 12:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:48.566 12:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:48.566 12:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:48.566 12:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.566 12:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.566 12:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.566 12:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:48.566 12:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.566 12:14:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:48.566 [ 00:15:48.566 { 00:15:48.566 "name": "BaseBdev1", 00:15:48.566 "aliases": [ 00:15:48.566 "44679464-d172-4507-901e-bb82c481c6b9" 00:15:48.566 ], 00:15:48.566 "product_name": "Malloc disk", 00:15:48.566 "block_size": 512, 00:15:48.566 "num_blocks": 65536, 00:15:48.566 "uuid": "44679464-d172-4507-901e-bb82c481c6b9", 00:15:48.566 "assigned_rate_limits": { 00:15:48.566 "rw_ios_per_sec": 0, 00:15:48.566 "rw_mbytes_per_sec": 0, 00:15:48.566 "r_mbytes_per_sec": 0, 00:15:48.566 "w_mbytes_per_sec": 0 00:15:48.566 }, 00:15:48.566 "claimed": true, 00:15:48.566 "claim_type": "exclusive_write", 00:15:48.566 "zoned": false, 00:15:48.566 "supported_io_types": { 00:15:48.566 "read": true, 00:15:48.566 "write": true, 00:15:48.566 "unmap": true, 00:15:48.566 "flush": true, 00:15:48.566 "reset": true, 00:15:48.566 "nvme_admin": false, 00:15:48.566 "nvme_io": false, 00:15:48.566 "nvme_io_md": false, 00:15:48.566 "write_zeroes": true, 00:15:48.566 "zcopy": true, 00:15:48.566 "get_zone_info": false, 00:15:48.566 "zone_management": false, 00:15:48.566 "zone_append": false, 00:15:48.566 "compare": false, 00:15:48.566 "compare_and_write": false, 00:15:48.566 "abort": true, 00:15:48.566 "seek_hole": false, 00:15:48.566 "seek_data": false, 00:15:48.566 "copy": true, 00:15:48.566 "nvme_iov_md": false 00:15:48.566 }, 00:15:48.566 "memory_domains": [ 00:15:48.566 { 00:15:48.566 "dma_device_id": "system", 00:15:48.566 "dma_device_type": 1 00:15:48.566 }, 00:15:48.566 { 00:15:48.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.566 "dma_device_type": 2 00:15:48.566 } 00:15:48.566 ], 00:15:48.566 "driver_specific": {} 00:15:48.566 } 00:15:48.566 ] 00:15:48.566 12:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.566 12:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:48.566 12:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:48.566 12:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:48.566 12:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:48.566 12:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:48.566 12:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.566 12:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:48.566 12:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.566 12:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.566 12:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.566 12:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.566 12:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.566 12:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.566 12:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.566 12:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.566 12:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.566 12:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.566 "name": "Existed_Raid", 00:15:48.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.566 "strip_size_kb": 64, 00:15:48.566 "state": "configuring", 00:15:48.566 "raid_level": "concat", 00:15:48.566 "superblock": false, 
00:15:48.566 "num_base_bdevs": 4, 00:15:48.566 "num_base_bdevs_discovered": 3, 00:15:48.566 "num_base_bdevs_operational": 4, 00:15:48.566 "base_bdevs_list": [ 00:15:48.566 { 00:15:48.566 "name": "BaseBdev1", 00:15:48.566 "uuid": "44679464-d172-4507-901e-bb82c481c6b9", 00:15:48.566 "is_configured": true, 00:15:48.566 "data_offset": 0, 00:15:48.566 "data_size": 65536 00:15:48.566 }, 00:15:48.566 { 00:15:48.567 "name": null, 00:15:48.567 "uuid": "1bb83140-5cb5-4f64-8da3-7d64e9dec69f", 00:15:48.567 "is_configured": false, 00:15:48.567 "data_offset": 0, 00:15:48.567 "data_size": 65536 00:15:48.567 }, 00:15:48.567 { 00:15:48.567 "name": "BaseBdev3", 00:15:48.567 "uuid": "23f792b8-952d-46c1-8c29-7df32b3efcc5", 00:15:48.567 "is_configured": true, 00:15:48.567 "data_offset": 0, 00:15:48.567 "data_size": 65536 00:15:48.567 }, 00:15:48.567 { 00:15:48.567 "name": "BaseBdev4", 00:15:48.567 "uuid": "528ad582-69c7-4709-bd39-79830d22f1b8", 00:15:48.567 "is_configured": true, 00:15:48.567 "data_offset": 0, 00:15:48.567 "data_size": 65536 00:15:48.567 } 00:15:48.567 ] 00:15:48.567 }' 00:15:48.567 12:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.567 12:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.134 12:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:49.134 12:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.134 12:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.134 12:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.134 12:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.134 12:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:49.134 12:14:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:49.134 12:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.134 12:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.134 [2024-11-25 12:14:45.040444] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:49.134 12:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.134 12:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:49.134 12:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:49.134 12:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.134 12:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:49.134 12:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.134 12:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:49.134 12:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.134 12:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.134 12:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.134 12:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.134 12:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.134 12:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.134 12:14:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.134 12:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.134 12:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.134 12:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.134 "name": "Existed_Raid", 00:15:49.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.134 "strip_size_kb": 64, 00:15:49.134 "state": "configuring", 00:15:49.134 "raid_level": "concat", 00:15:49.134 "superblock": false, 00:15:49.134 "num_base_bdevs": 4, 00:15:49.134 "num_base_bdevs_discovered": 2, 00:15:49.134 "num_base_bdevs_operational": 4, 00:15:49.134 "base_bdevs_list": [ 00:15:49.134 { 00:15:49.134 "name": "BaseBdev1", 00:15:49.134 "uuid": "44679464-d172-4507-901e-bb82c481c6b9", 00:15:49.134 "is_configured": true, 00:15:49.134 "data_offset": 0, 00:15:49.134 "data_size": 65536 00:15:49.134 }, 00:15:49.134 { 00:15:49.134 "name": null, 00:15:49.134 "uuid": "1bb83140-5cb5-4f64-8da3-7d64e9dec69f", 00:15:49.134 "is_configured": false, 00:15:49.134 "data_offset": 0, 00:15:49.134 "data_size": 65536 00:15:49.134 }, 00:15:49.134 { 00:15:49.134 "name": null, 00:15:49.134 "uuid": "23f792b8-952d-46c1-8c29-7df32b3efcc5", 00:15:49.134 "is_configured": false, 00:15:49.134 "data_offset": 0, 00:15:49.134 "data_size": 65536 00:15:49.134 }, 00:15:49.134 { 00:15:49.134 "name": "BaseBdev4", 00:15:49.134 "uuid": "528ad582-69c7-4709-bd39-79830d22f1b8", 00:15:49.134 "is_configured": true, 00:15:49.134 "data_offset": 0, 00:15:49.134 "data_size": 65536 00:15:49.134 } 00:15:49.134 ] 00:15:49.134 }' 00:15:49.134 12:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.134 12:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.772 12:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:15:49.772 12:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:49.772 12:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.772 12:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.772 12:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.772 12:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:49.772 12:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:49.772 12:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.772 12:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.772 [2024-11-25 12:14:45.580551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:49.772 12:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.772 12:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:49.772 12:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:49.772 12:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.772 12:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:49.772 12:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.772 12:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:49.772 12:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:15:49.772 12:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.772 12:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.772 12:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.772 12:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.772 12:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.772 12:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.772 12:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.772 12:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.772 12:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.772 "name": "Existed_Raid", 00:15:49.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.772 "strip_size_kb": 64, 00:15:49.772 "state": "configuring", 00:15:49.772 "raid_level": "concat", 00:15:49.772 "superblock": false, 00:15:49.772 "num_base_bdevs": 4, 00:15:49.772 "num_base_bdevs_discovered": 3, 00:15:49.772 "num_base_bdevs_operational": 4, 00:15:49.772 "base_bdevs_list": [ 00:15:49.772 { 00:15:49.772 "name": "BaseBdev1", 00:15:49.772 "uuid": "44679464-d172-4507-901e-bb82c481c6b9", 00:15:49.772 "is_configured": true, 00:15:49.772 "data_offset": 0, 00:15:49.772 "data_size": 65536 00:15:49.772 }, 00:15:49.772 { 00:15:49.772 "name": null, 00:15:49.772 "uuid": "1bb83140-5cb5-4f64-8da3-7d64e9dec69f", 00:15:49.772 "is_configured": false, 00:15:49.772 "data_offset": 0, 00:15:49.772 "data_size": 65536 00:15:49.772 }, 00:15:49.772 { 00:15:49.772 "name": "BaseBdev3", 00:15:49.772 "uuid": "23f792b8-952d-46c1-8c29-7df32b3efcc5", 00:15:49.772 
"is_configured": true, 00:15:49.772 "data_offset": 0, 00:15:49.772 "data_size": 65536 00:15:49.772 }, 00:15:49.772 { 00:15:49.772 "name": "BaseBdev4", 00:15:49.772 "uuid": "528ad582-69c7-4709-bd39-79830d22f1b8", 00:15:49.772 "is_configured": true, 00:15:49.772 "data_offset": 0, 00:15:49.772 "data_size": 65536 00:15:49.772 } 00:15:49.772 ] 00:15:49.772 }' 00:15:49.772 12:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.772 12:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.031 12:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:50.031 12:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.031 12:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.031 12:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.031 12:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.032 12:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:50.032 12:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:50.032 12:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.032 12:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.032 [2024-11-25 12:14:46.108804] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:50.290 12:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.290 12:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:50.290 12:14:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.290 12:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.290 12:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:50.290 12:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.290 12:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:50.290 12:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.290 12:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.290 12:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.290 12:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.290 12:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.290 12:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.290 12:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.290 12:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.290 12:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.290 12:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.290 "name": "Existed_Raid", 00:15:50.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.290 "strip_size_kb": 64, 00:15:50.290 "state": "configuring", 00:15:50.290 "raid_level": "concat", 00:15:50.290 "superblock": false, 00:15:50.290 "num_base_bdevs": 4, 00:15:50.290 "num_base_bdevs_discovered": 2, 00:15:50.290 "num_base_bdevs_operational": 4, 
00:15:50.290 "base_bdevs_list": [ 00:15:50.290 { 00:15:50.290 "name": null, 00:15:50.290 "uuid": "44679464-d172-4507-901e-bb82c481c6b9", 00:15:50.290 "is_configured": false, 00:15:50.290 "data_offset": 0, 00:15:50.290 "data_size": 65536 00:15:50.290 }, 00:15:50.290 { 00:15:50.290 "name": null, 00:15:50.290 "uuid": "1bb83140-5cb5-4f64-8da3-7d64e9dec69f", 00:15:50.290 "is_configured": false, 00:15:50.290 "data_offset": 0, 00:15:50.290 "data_size": 65536 00:15:50.290 }, 00:15:50.290 { 00:15:50.290 "name": "BaseBdev3", 00:15:50.290 "uuid": "23f792b8-952d-46c1-8c29-7df32b3efcc5", 00:15:50.290 "is_configured": true, 00:15:50.290 "data_offset": 0, 00:15:50.290 "data_size": 65536 00:15:50.290 }, 00:15:50.290 { 00:15:50.290 "name": "BaseBdev4", 00:15:50.290 "uuid": "528ad582-69c7-4709-bd39-79830d22f1b8", 00:15:50.290 "is_configured": true, 00:15:50.290 "data_offset": 0, 00:15:50.290 "data_size": 65536 00:15:50.290 } 00:15:50.290 ] 00:15:50.290 }' 00:15:50.290 12:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.290 12:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.858 12:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.858 12:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.858 12:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:50.858 12:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.858 12:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.858 12:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:50.858 12:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:50.858 12:14:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.858 12:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.858 [2024-11-25 12:14:46.700270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:50.858 12:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.858 12:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:50.858 12:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.858 12:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.858 12:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:50.858 12:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.858 12:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:50.858 12:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.858 12:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.858 12:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.858 12:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.858 12:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.858 12:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.858 12:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.858 12:14:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.858 12:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.858 12:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.858 "name": "Existed_Raid", 00:15:50.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.858 "strip_size_kb": 64, 00:15:50.858 "state": "configuring", 00:15:50.858 "raid_level": "concat", 00:15:50.858 "superblock": false, 00:15:50.858 "num_base_bdevs": 4, 00:15:50.858 "num_base_bdevs_discovered": 3, 00:15:50.858 "num_base_bdevs_operational": 4, 00:15:50.858 "base_bdevs_list": [ 00:15:50.858 { 00:15:50.858 "name": null, 00:15:50.858 "uuid": "44679464-d172-4507-901e-bb82c481c6b9", 00:15:50.858 "is_configured": false, 00:15:50.858 "data_offset": 0, 00:15:50.858 "data_size": 65536 00:15:50.858 }, 00:15:50.858 { 00:15:50.858 "name": "BaseBdev2", 00:15:50.858 "uuid": "1bb83140-5cb5-4f64-8da3-7d64e9dec69f", 00:15:50.858 "is_configured": true, 00:15:50.858 "data_offset": 0, 00:15:50.858 "data_size": 65536 00:15:50.858 }, 00:15:50.858 { 00:15:50.858 "name": "BaseBdev3", 00:15:50.858 "uuid": "23f792b8-952d-46c1-8c29-7df32b3efcc5", 00:15:50.858 "is_configured": true, 00:15:50.858 "data_offset": 0, 00:15:50.859 "data_size": 65536 00:15:50.859 }, 00:15:50.859 { 00:15:50.859 "name": "BaseBdev4", 00:15:50.859 "uuid": "528ad582-69c7-4709-bd39-79830d22f1b8", 00:15:50.859 "is_configured": true, 00:15:50.859 "data_offset": 0, 00:15:50.859 "data_size": 65536 00:15:50.859 } 00:15:50.859 ] 00:15:50.859 }' 00:15:50.859 12:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.859 12:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.427 12:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.427 12:14:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:51.427 12:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.427 12:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.427 12:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.427 12:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:51.427 12:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:51.427 12:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.427 12:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.427 12:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.427 12:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.427 12:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 44679464-d172-4507-901e-bb82c481c6b9 00:15:51.427 12:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.427 12:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.427 [2024-11-25 12:14:47.354874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:51.427 [2024-11-25 12:14:47.354961] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:51.427 [2024-11-25 12:14:47.354974] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:15:51.427 [2024-11-25 12:14:47.355341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:51.427 [2024-11-25 12:14:47.355564] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:51.427 [2024-11-25 12:14:47.355587] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:51.427 [2024-11-25 12:14:47.355919] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.427 NewBaseBdev 00:15:51.427 12:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.427 12:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:51.427 12:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:51.428 12:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:51.428 12:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:51.428 12:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:51.428 12:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:51.428 12:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:51.428 12:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.428 12:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.428 12:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.428 12:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:51.428 12:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.428 12:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.428 [ 00:15:51.428 { 
00:15:51.428 "name": "NewBaseBdev", 00:15:51.428 "aliases": [ 00:15:51.428 "44679464-d172-4507-901e-bb82c481c6b9" 00:15:51.428 ], 00:15:51.428 "product_name": "Malloc disk", 00:15:51.428 "block_size": 512, 00:15:51.428 "num_blocks": 65536, 00:15:51.428 "uuid": "44679464-d172-4507-901e-bb82c481c6b9", 00:15:51.428 "assigned_rate_limits": { 00:15:51.428 "rw_ios_per_sec": 0, 00:15:51.428 "rw_mbytes_per_sec": 0, 00:15:51.428 "r_mbytes_per_sec": 0, 00:15:51.428 "w_mbytes_per_sec": 0 00:15:51.428 }, 00:15:51.428 "claimed": true, 00:15:51.428 "claim_type": "exclusive_write", 00:15:51.428 "zoned": false, 00:15:51.428 "supported_io_types": { 00:15:51.428 "read": true, 00:15:51.428 "write": true, 00:15:51.428 "unmap": true, 00:15:51.428 "flush": true, 00:15:51.428 "reset": true, 00:15:51.428 "nvme_admin": false, 00:15:51.428 "nvme_io": false, 00:15:51.428 "nvme_io_md": false, 00:15:51.428 "write_zeroes": true, 00:15:51.428 "zcopy": true, 00:15:51.428 "get_zone_info": false, 00:15:51.428 "zone_management": false, 00:15:51.428 "zone_append": false, 00:15:51.428 "compare": false, 00:15:51.428 "compare_and_write": false, 00:15:51.428 "abort": true, 00:15:51.428 "seek_hole": false, 00:15:51.428 "seek_data": false, 00:15:51.428 "copy": true, 00:15:51.428 "nvme_iov_md": false 00:15:51.428 }, 00:15:51.428 "memory_domains": [ 00:15:51.428 { 00:15:51.428 "dma_device_id": "system", 00:15:51.428 "dma_device_type": 1 00:15:51.428 }, 00:15:51.428 { 00:15:51.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.428 "dma_device_type": 2 00:15:51.428 } 00:15:51.428 ], 00:15:51.428 "driver_specific": {} 00:15:51.428 } 00:15:51.428 ] 00:15:51.428 12:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.428 12:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:51.428 12:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:15:51.428 
12:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.428 12:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.428 12:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:51.428 12:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.428 12:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:51.428 12:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.428 12:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.428 12:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.428 12:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.428 12:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.428 12:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.428 12:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.428 12:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.428 12:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.428 12:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.428 "name": "Existed_Raid", 00:15:51.428 "uuid": "8f0c70d5-e252-4882-bb07-15b5e92499bf", 00:15:51.428 "strip_size_kb": 64, 00:15:51.428 "state": "online", 00:15:51.428 "raid_level": "concat", 00:15:51.428 "superblock": false, 00:15:51.428 "num_base_bdevs": 4, 00:15:51.428 "num_base_bdevs_discovered": 4, 00:15:51.428 
"num_base_bdevs_operational": 4, 00:15:51.428 "base_bdevs_list": [ 00:15:51.428 { 00:15:51.428 "name": "NewBaseBdev", 00:15:51.428 "uuid": "44679464-d172-4507-901e-bb82c481c6b9", 00:15:51.428 "is_configured": true, 00:15:51.428 "data_offset": 0, 00:15:51.428 "data_size": 65536 00:15:51.428 }, 00:15:51.428 { 00:15:51.428 "name": "BaseBdev2", 00:15:51.428 "uuid": "1bb83140-5cb5-4f64-8da3-7d64e9dec69f", 00:15:51.428 "is_configured": true, 00:15:51.428 "data_offset": 0, 00:15:51.428 "data_size": 65536 00:15:51.428 }, 00:15:51.428 { 00:15:51.428 "name": "BaseBdev3", 00:15:51.428 "uuid": "23f792b8-952d-46c1-8c29-7df32b3efcc5", 00:15:51.428 "is_configured": true, 00:15:51.428 "data_offset": 0, 00:15:51.428 "data_size": 65536 00:15:51.428 }, 00:15:51.428 { 00:15:51.428 "name": "BaseBdev4", 00:15:51.428 "uuid": "528ad582-69c7-4709-bd39-79830d22f1b8", 00:15:51.428 "is_configured": true, 00:15:51.428 "data_offset": 0, 00:15:51.428 "data_size": 65536 00:15:51.428 } 00:15:51.428 ] 00:15:51.428 }' 00:15:51.428 12:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.428 12:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.996 12:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:51.996 12:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:51.996 12:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:51.996 12:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:51.996 12:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:51.996 12:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:51.996 12:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:15:51.996 12:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.996 12:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.996 12:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:51.996 [2024-11-25 12:14:47.891723] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:51.996 12:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.996 12:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:51.996 "name": "Existed_Raid", 00:15:51.996 "aliases": [ 00:15:51.996 "8f0c70d5-e252-4882-bb07-15b5e92499bf" 00:15:51.996 ], 00:15:51.996 "product_name": "Raid Volume", 00:15:51.996 "block_size": 512, 00:15:51.996 "num_blocks": 262144, 00:15:51.996 "uuid": "8f0c70d5-e252-4882-bb07-15b5e92499bf", 00:15:51.996 "assigned_rate_limits": { 00:15:51.996 "rw_ios_per_sec": 0, 00:15:51.996 "rw_mbytes_per_sec": 0, 00:15:51.996 "r_mbytes_per_sec": 0, 00:15:51.996 "w_mbytes_per_sec": 0 00:15:51.996 }, 00:15:51.996 "claimed": false, 00:15:51.996 "zoned": false, 00:15:51.996 "supported_io_types": { 00:15:51.996 "read": true, 00:15:51.996 "write": true, 00:15:51.996 "unmap": true, 00:15:51.996 "flush": true, 00:15:51.996 "reset": true, 00:15:51.996 "nvme_admin": false, 00:15:51.996 "nvme_io": false, 00:15:51.996 "nvme_io_md": false, 00:15:51.996 "write_zeroes": true, 00:15:51.996 "zcopy": false, 00:15:51.996 "get_zone_info": false, 00:15:51.996 "zone_management": false, 00:15:51.996 "zone_append": false, 00:15:51.996 "compare": false, 00:15:51.996 "compare_and_write": false, 00:15:51.996 "abort": false, 00:15:51.996 "seek_hole": false, 00:15:51.996 "seek_data": false, 00:15:51.996 "copy": false, 00:15:51.996 "nvme_iov_md": false 00:15:51.996 }, 00:15:51.996 "memory_domains": [ 00:15:51.996 { 00:15:51.996 "dma_device_id": "system", 
00:15:51.996 "dma_device_type": 1 00:15:51.996 }, 00:15:51.996 { 00:15:51.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.996 "dma_device_type": 2 00:15:51.996 }, 00:15:51.996 { 00:15:51.996 "dma_device_id": "system", 00:15:51.996 "dma_device_type": 1 00:15:51.996 }, 00:15:51.996 { 00:15:51.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.996 "dma_device_type": 2 00:15:51.996 }, 00:15:51.996 { 00:15:51.996 "dma_device_id": "system", 00:15:51.996 "dma_device_type": 1 00:15:51.996 }, 00:15:51.996 { 00:15:51.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.996 "dma_device_type": 2 00:15:51.996 }, 00:15:51.996 { 00:15:51.996 "dma_device_id": "system", 00:15:51.996 "dma_device_type": 1 00:15:51.996 }, 00:15:51.996 { 00:15:51.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.996 "dma_device_type": 2 00:15:51.996 } 00:15:51.996 ], 00:15:51.996 "driver_specific": { 00:15:51.996 "raid": { 00:15:51.996 "uuid": "8f0c70d5-e252-4882-bb07-15b5e92499bf", 00:15:51.996 "strip_size_kb": 64, 00:15:51.996 "state": "online", 00:15:51.996 "raid_level": "concat", 00:15:51.996 "superblock": false, 00:15:51.996 "num_base_bdevs": 4, 00:15:51.996 "num_base_bdevs_discovered": 4, 00:15:51.996 "num_base_bdevs_operational": 4, 00:15:51.996 "base_bdevs_list": [ 00:15:51.996 { 00:15:51.996 "name": "NewBaseBdev", 00:15:51.996 "uuid": "44679464-d172-4507-901e-bb82c481c6b9", 00:15:51.996 "is_configured": true, 00:15:51.996 "data_offset": 0, 00:15:51.996 "data_size": 65536 00:15:51.996 }, 00:15:51.996 { 00:15:51.996 "name": "BaseBdev2", 00:15:51.996 "uuid": "1bb83140-5cb5-4f64-8da3-7d64e9dec69f", 00:15:51.996 "is_configured": true, 00:15:51.996 "data_offset": 0, 00:15:51.996 "data_size": 65536 00:15:51.996 }, 00:15:51.996 { 00:15:51.996 "name": "BaseBdev3", 00:15:51.996 "uuid": "23f792b8-952d-46c1-8c29-7df32b3efcc5", 00:15:51.996 "is_configured": true, 00:15:51.996 "data_offset": 0, 00:15:51.996 "data_size": 65536 00:15:51.996 }, 00:15:51.996 { 00:15:51.996 "name": "BaseBdev4", 
00:15:51.996 "uuid": "528ad582-69c7-4709-bd39-79830d22f1b8", 00:15:51.996 "is_configured": true, 00:15:51.996 "data_offset": 0, 00:15:51.996 "data_size": 65536 00:15:51.996 } 00:15:51.996 ] 00:15:51.996 } 00:15:51.996 } 00:15:51.996 }' 00:15:51.997 12:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:51.997 12:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:51.997 BaseBdev2 00:15:51.997 BaseBdev3 00:15:51.997 BaseBdev4' 00:15:51.997 12:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.997 12:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:51.997 12:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:51.997 12:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:51.997 12:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.997 12:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.997 12:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.997 12:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.256 12:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:52.256 12:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:52.256 12:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:52.256 12:14:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.256 12:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:52.256 12:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.256 12:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.256 12:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.256 12:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:52.256 12:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:52.256 12:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:52.256 12:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:52.256 12:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.256 12:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.256 12:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.256 12:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.256 12:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:52.256 12:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:52.256 12:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:52.256 12:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:15:52.256 12:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:52.256 12:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.256 12:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.256 12:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.256 12:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:52.256 12:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:52.256 12:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:52.256 12:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.256 12:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.256 [2024-11-25 12:14:48.279324] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:52.256 [2024-11-25 12:14:48.279452] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:52.256 [2024-11-25 12:14:48.279575] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:52.256 [2024-11-25 12:14:48.279697] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:52.256 [2024-11-25 12:14:48.279726] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:52.256 12:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.256 12:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71398 00:15:52.256 12:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71398 
']' 00:15:52.256 12:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71398 00:15:52.256 12:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:52.256 12:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:52.256 12:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71398 00:15:52.256 killing process with pid 71398 00:15:52.256 12:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:52.256 12:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:52.256 12:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71398' 00:15:52.256 12:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71398 00:15:52.257 [2024-11-25 12:14:48.319628] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:52.257 12:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71398 00:15:52.824 [2024-11-25 12:14:48.711869] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:53.802 12:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:53.802 00:15:53.802 real 0m12.680s 00:15:53.802 user 0m20.776s 00:15:53.802 sys 0m1.762s 00:15:53.802 12:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:53.802 12:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.802 ************************************ 00:15:53.802 END TEST raid_state_function_test 00:15:53.802 ************************************ 00:15:54.061 12:14:49 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:15:54.061 
12:14:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:54.061 12:14:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:54.061 12:14:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:54.061 ************************************ 00:15:54.061 START TEST raid_state_function_test_sb 00:15:54.061 ************************************ 00:15:54.061 12:14:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:15:54.061 12:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:15:54.061 12:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:54.061 12:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:54.061 12:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:54.061 12:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:54.061 12:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:54.061 12:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:54.061 12:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:54.061 12:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:54.061 12:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:54.061 12:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:54.061 12:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:54.061 12:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:54.061 12:14:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:54.061 12:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:54.061 12:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:54.061 12:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:54.061 12:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:54.061 12:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:54.061 12:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:54.061 12:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:54.061 12:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:54.061 12:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:54.061 12:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:54.061 12:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:15:54.061 12:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:54.061 12:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:54.061 12:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:54.061 12:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:54.061 12:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72082 00:15:54.061 12:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:54.061 Process raid pid: 72082 00:15:54.061 12:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72082' 00:15:54.061 12:14:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72082 00:15:54.061 12:14:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72082 ']' 00:15:54.061 12:14:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.061 12:14:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:54.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.061 12:14:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.061 12:14:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:54.061 12:14:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.061 [2024-11-25 12:14:50.046647] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:15:54.061 [2024-11-25 12:14:50.046929] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:54.320 [2024-11-25 12:14:50.245244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.320 [2024-11-25 12:14:50.404150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.579 [2024-11-25 12:14:50.637100] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:54.579 [2024-11-25 12:14:50.637175] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:55.148 12:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:55.148 12:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:55.148 12:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:55.148 12:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.148 12:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.148 [2024-11-25 12:14:51.029158] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:55.148 [2024-11-25 12:14:51.029259] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:55.148 [2024-11-25 12:14:51.029277] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:55.148 [2024-11-25 12:14:51.029294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:55.148 [2024-11-25 12:14:51.029303] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:15:55.148 [2024-11-25 12:14:51.029317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:55.148 [2024-11-25 12:14:51.029327] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:55.148 [2024-11-25 12:14:51.029359] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:55.148 12:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.148 12:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:55.148 12:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:55.148 12:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:55.148 12:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:55.148 12:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.148 12:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:55.148 12:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.148 12:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.148 12:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.148 12:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.148 12:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.148 12:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.148 12:14:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.148 12:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.148 12:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.148 12:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.148 "name": "Existed_Raid", 00:15:55.148 "uuid": "0b938b23-e043-4581-b0d9-b315c05b9ed0", 00:15:55.148 "strip_size_kb": 64, 00:15:55.148 "state": "configuring", 00:15:55.148 "raid_level": "concat", 00:15:55.148 "superblock": true, 00:15:55.148 "num_base_bdevs": 4, 00:15:55.148 "num_base_bdevs_discovered": 0, 00:15:55.148 "num_base_bdevs_operational": 4, 00:15:55.148 "base_bdevs_list": [ 00:15:55.148 { 00:15:55.148 "name": "BaseBdev1", 00:15:55.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.148 "is_configured": false, 00:15:55.148 "data_offset": 0, 00:15:55.148 "data_size": 0 00:15:55.148 }, 00:15:55.148 { 00:15:55.148 "name": "BaseBdev2", 00:15:55.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.148 "is_configured": false, 00:15:55.148 "data_offset": 0, 00:15:55.148 "data_size": 0 00:15:55.148 }, 00:15:55.148 { 00:15:55.148 "name": "BaseBdev3", 00:15:55.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.148 "is_configured": false, 00:15:55.148 "data_offset": 0, 00:15:55.148 "data_size": 0 00:15:55.148 }, 00:15:55.148 { 00:15:55.148 "name": "BaseBdev4", 00:15:55.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.148 "is_configured": false, 00:15:55.148 "data_offset": 0, 00:15:55.148 "data_size": 0 00:15:55.148 } 00:15:55.148 ] 00:15:55.148 }' 00:15:55.148 12:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.148 12:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.717 12:14:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:55.717 12:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.717 12:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.717 [2024-11-25 12:14:51.553299] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:55.717 [2024-11-25 12:14:51.553398] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:55.717 12:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.717 12:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:55.717 12:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.717 12:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.717 [2024-11-25 12:14:51.561201] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:55.717 [2024-11-25 12:14:51.561259] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:55.717 [2024-11-25 12:14:51.561274] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:55.717 [2024-11-25 12:14:51.561289] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:55.717 [2024-11-25 12:14:51.561299] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:55.717 [2024-11-25 12:14:51.561314] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:55.717 [2024-11-25 12:14:51.561323] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:15:55.717 [2024-11-25 12:14:51.561351] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:55.717 12:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.717 12:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:55.717 12:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.717 12:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.717 [2024-11-25 12:14:51.612211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:55.717 BaseBdev1 00:15:55.717 12:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.717 12:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:55.717 12:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:55.717 12:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:55.717 12:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:55.717 12:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:55.717 12:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:55.717 12:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:55.717 12:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.717 12:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.717 12:14:51 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.717 12:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:55.717 12:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.717 12:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.717 [ 00:15:55.717 { 00:15:55.717 "name": "BaseBdev1", 00:15:55.717 "aliases": [ 00:15:55.717 "b74d63bb-c7eb-4c97-b384-32ae7b97fcba" 00:15:55.717 ], 00:15:55.717 "product_name": "Malloc disk", 00:15:55.717 "block_size": 512, 00:15:55.717 "num_blocks": 65536, 00:15:55.717 "uuid": "b74d63bb-c7eb-4c97-b384-32ae7b97fcba", 00:15:55.717 "assigned_rate_limits": { 00:15:55.717 "rw_ios_per_sec": 0, 00:15:55.717 "rw_mbytes_per_sec": 0, 00:15:55.717 "r_mbytes_per_sec": 0, 00:15:55.717 "w_mbytes_per_sec": 0 00:15:55.717 }, 00:15:55.717 "claimed": true, 00:15:55.717 "claim_type": "exclusive_write", 00:15:55.717 "zoned": false, 00:15:55.717 "supported_io_types": { 00:15:55.717 "read": true, 00:15:55.717 "write": true, 00:15:55.717 "unmap": true, 00:15:55.717 "flush": true, 00:15:55.717 "reset": true, 00:15:55.717 "nvme_admin": false, 00:15:55.717 "nvme_io": false, 00:15:55.717 "nvme_io_md": false, 00:15:55.717 "write_zeroes": true, 00:15:55.717 "zcopy": true, 00:15:55.717 "get_zone_info": false, 00:15:55.717 "zone_management": false, 00:15:55.717 "zone_append": false, 00:15:55.717 "compare": false, 00:15:55.717 "compare_and_write": false, 00:15:55.717 "abort": true, 00:15:55.717 "seek_hole": false, 00:15:55.717 "seek_data": false, 00:15:55.717 "copy": true, 00:15:55.717 "nvme_iov_md": false 00:15:55.717 }, 00:15:55.717 "memory_domains": [ 00:15:55.717 { 00:15:55.717 "dma_device_id": "system", 00:15:55.717 "dma_device_type": 1 00:15:55.717 }, 00:15:55.717 { 00:15:55.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.717 "dma_device_type": 2 00:15:55.717 } 
00:15:55.717 ], 00:15:55.717 "driver_specific": {} 00:15:55.717 } 00:15:55.717 ] 00:15:55.717 12:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.717 12:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:55.717 12:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:55.717 12:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:55.717 12:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:55.717 12:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:55.717 12:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.717 12:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:55.717 12:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.717 12:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.717 12:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.717 12:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.717 12:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.717 12:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.717 12:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.717 12:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.717 12:14:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.717 12:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.717 "name": "Existed_Raid", 00:15:55.717 "uuid": "e18e5a53-9165-4256-9a7c-3bd5a443c351", 00:15:55.717 "strip_size_kb": 64, 00:15:55.717 "state": "configuring", 00:15:55.717 "raid_level": "concat", 00:15:55.717 "superblock": true, 00:15:55.717 "num_base_bdevs": 4, 00:15:55.717 "num_base_bdevs_discovered": 1, 00:15:55.717 "num_base_bdevs_operational": 4, 00:15:55.717 "base_bdevs_list": [ 00:15:55.717 { 00:15:55.717 "name": "BaseBdev1", 00:15:55.717 "uuid": "b74d63bb-c7eb-4c97-b384-32ae7b97fcba", 00:15:55.717 "is_configured": true, 00:15:55.717 "data_offset": 2048, 00:15:55.717 "data_size": 63488 00:15:55.717 }, 00:15:55.717 { 00:15:55.717 "name": "BaseBdev2", 00:15:55.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.717 "is_configured": false, 00:15:55.717 "data_offset": 0, 00:15:55.717 "data_size": 0 00:15:55.717 }, 00:15:55.717 { 00:15:55.717 "name": "BaseBdev3", 00:15:55.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.717 "is_configured": false, 00:15:55.718 "data_offset": 0, 00:15:55.718 "data_size": 0 00:15:55.718 }, 00:15:55.718 { 00:15:55.718 "name": "BaseBdev4", 00:15:55.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.718 "is_configured": false, 00:15:55.718 "data_offset": 0, 00:15:55.718 "data_size": 0 00:15:55.718 } 00:15:55.718 ] 00:15:55.718 }' 00:15:55.718 12:14:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.718 12:14:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.286 12:14:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:56.286 12:14:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.286 12:14:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.286 [2024-11-25 12:14:52.140492] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:56.286 [2024-11-25 12:14:52.140608] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:56.286 12:14:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.286 12:14:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:56.286 12:14:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.286 12:14:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.286 [2024-11-25 12:14:52.148569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:56.286 [2024-11-25 12:14:52.151362] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:56.286 [2024-11-25 12:14:52.151443] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:56.286 [2024-11-25 12:14:52.151461] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:56.286 [2024-11-25 12:14:52.151479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:56.286 [2024-11-25 12:14:52.151490] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:56.286 [2024-11-25 12:14:52.151503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:56.286 12:14:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.286 12:14:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:15:56.286 12:14:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:56.286 12:14:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:56.286 12:14:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:56.286 12:14:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:56.286 12:14:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:56.286 12:14:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.286 12:14:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:56.286 12:14:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.286 12:14:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.286 12:14:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.286 12:14:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.286 12:14:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.286 12:14:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.286 12:14:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.286 12:14:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.286 12:14:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.286 12:14:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:56.286 "name": "Existed_Raid", 00:15:56.286 "uuid": "2229b9e6-2047-4ca4-82f7-51930d82920d", 00:15:56.286 "strip_size_kb": 64, 00:15:56.286 "state": "configuring", 00:15:56.286 "raid_level": "concat", 00:15:56.286 "superblock": true, 00:15:56.286 "num_base_bdevs": 4, 00:15:56.286 "num_base_bdevs_discovered": 1, 00:15:56.286 "num_base_bdevs_operational": 4, 00:15:56.286 "base_bdevs_list": [ 00:15:56.286 { 00:15:56.286 "name": "BaseBdev1", 00:15:56.286 "uuid": "b74d63bb-c7eb-4c97-b384-32ae7b97fcba", 00:15:56.286 "is_configured": true, 00:15:56.286 "data_offset": 2048, 00:15:56.286 "data_size": 63488 00:15:56.286 }, 00:15:56.286 { 00:15:56.286 "name": "BaseBdev2", 00:15:56.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.286 "is_configured": false, 00:15:56.286 "data_offset": 0, 00:15:56.286 "data_size": 0 00:15:56.286 }, 00:15:56.286 { 00:15:56.286 "name": "BaseBdev3", 00:15:56.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.286 "is_configured": false, 00:15:56.286 "data_offset": 0, 00:15:56.286 "data_size": 0 00:15:56.286 }, 00:15:56.286 { 00:15:56.286 "name": "BaseBdev4", 00:15:56.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.286 "is_configured": false, 00:15:56.286 "data_offset": 0, 00:15:56.286 "data_size": 0 00:15:56.286 } 00:15:56.286 ] 00:15:56.286 }' 00:15:56.286 12:14:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.286 12:14:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.853 12:14:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:56.853 12:14:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.853 12:14:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.853 [2024-11-25 12:14:52.719208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:15:56.853 BaseBdev2 00:15:56.853 12:14:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.853 12:14:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:56.853 12:14:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:56.853 12:14:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:56.853 12:14:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:56.853 12:14:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:56.853 12:14:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:56.854 12:14:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:56.854 12:14:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.854 12:14:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.854 12:14:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.854 12:14:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:56.854 12:14:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.854 12:14:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.854 [ 00:15:56.854 { 00:15:56.854 "name": "BaseBdev2", 00:15:56.854 "aliases": [ 00:15:56.854 "2d218c52-339f-43aa-96e2-108cc4f2be56" 00:15:56.854 ], 00:15:56.854 "product_name": "Malloc disk", 00:15:56.854 "block_size": 512, 00:15:56.854 "num_blocks": 65536, 00:15:56.854 "uuid": "2d218c52-339f-43aa-96e2-108cc4f2be56", 
00:15:56.854 "assigned_rate_limits": { 00:15:56.854 "rw_ios_per_sec": 0, 00:15:56.854 "rw_mbytes_per_sec": 0, 00:15:56.854 "r_mbytes_per_sec": 0, 00:15:56.854 "w_mbytes_per_sec": 0 00:15:56.854 }, 00:15:56.854 "claimed": true, 00:15:56.854 "claim_type": "exclusive_write", 00:15:56.854 "zoned": false, 00:15:56.854 "supported_io_types": { 00:15:56.854 "read": true, 00:15:56.854 "write": true, 00:15:56.854 "unmap": true, 00:15:56.854 "flush": true, 00:15:56.854 "reset": true, 00:15:56.854 "nvme_admin": false, 00:15:56.854 "nvme_io": false, 00:15:56.854 "nvme_io_md": false, 00:15:56.854 "write_zeroes": true, 00:15:56.854 "zcopy": true, 00:15:56.854 "get_zone_info": false, 00:15:56.854 "zone_management": false, 00:15:56.854 "zone_append": false, 00:15:56.854 "compare": false, 00:15:56.854 "compare_and_write": false, 00:15:56.854 "abort": true, 00:15:56.854 "seek_hole": false, 00:15:56.854 "seek_data": false, 00:15:56.854 "copy": true, 00:15:56.854 "nvme_iov_md": false 00:15:56.854 }, 00:15:56.854 "memory_domains": [ 00:15:56.854 { 00:15:56.854 "dma_device_id": "system", 00:15:56.854 "dma_device_type": 1 00:15:56.854 }, 00:15:56.854 { 00:15:56.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.854 "dma_device_type": 2 00:15:56.854 } 00:15:56.854 ], 00:15:56.854 "driver_specific": {} 00:15:56.854 } 00:15:56.854 ] 00:15:56.854 12:14:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.854 12:14:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:56.854 12:14:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:56.854 12:14:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:56.854 12:14:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:56.854 12:14:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:15:56.854 12:14:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:56.854 12:14:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:56.854 12:14:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.854 12:14:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:56.854 12:14:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.854 12:14:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.854 12:14:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.854 12:14:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.854 12:14:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.854 12:14:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.854 12:14:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.854 12:14:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.854 12:14:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.854 12:14:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.854 "name": "Existed_Raid", 00:15:56.854 "uuid": "2229b9e6-2047-4ca4-82f7-51930d82920d", 00:15:56.854 "strip_size_kb": 64, 00:15:56.854 "state": "configuring", 00:15:56.854 "raid_level": "concat", 00:15:56.854 "superblock": true, 00:15:56.854 "num_base_bdevs": 4, 00:15:56.854 "num_base_bdevs_discovered": 2, 00:15:56.854 
"num_base_bdevs_operational": 4, 00:15:56.854 "base_bdevs_list": [ 00:15:56.854 { 00:15:56.854 "name": "BaseBdev1", 00:15:56.854 "uuid": "b74d63bb-c7eb-4c97-b384-32ae7b97fcba", 00:15:56.854 "is_configured": true, 00:15:56.854 "data_offset": 2048, 00:15:56.854 "data_size": 63488 00:15:56.854 }, 00:15:56.854 { 00:15:56.854 "name": "BaseBdev2", 00:15:56.854 "uuid": "2d218c52-339f-43aa-96e2-108cc4f2be56", 00:15:56.854 "is_configured": true, 00:15:56.854 "data_offset": 2048, 00:15:56.854 "data_size": 63488 00:15:56.854 }, 00:15:56.854 { 00:15:56.854 "name": "BaseBdev3", 00:15:56.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.854 "is_configured": false, 00:15:56.854 "data_offset": 0, 00:15:56.854 "data_size": 0 00:15:56.854 }, 00:15:56.854 { 00:15:56.854 "name": "BaseBdev4", 00:15:56.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.854 "is_configured": false, 00:15:56.854 "data_offset": 0, 00:15:56.854 "data_size": 0 00:15:56.854 } 00:15:56.854 ] 00:15:56.854 }' 00:15:56.854 12:14:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.854 12:14:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.479 12:14:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:57.479 12:14:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.479 12:14:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.479 [2024-11-25 12:14:53.292923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:57.479 BaseBdev3 00:15:57.479 12:14:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.479 12:14:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:57.479 12:14:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:57.479 12:14:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:57.479 12:14:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:57.479 12:14:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:57.479 12:14:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:57.479 12:14:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:57.479 12:14:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.479 12:14:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.479 12:14:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.479 12:14:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:57.479 12:14:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.479 12:14:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.479 [ 00:15:57.479 { 00:15:57.479 "name": "BaseBdev3", 00:15:57.479 "aliases": [ 00:15:57.479 "594cb7cd-e4fb-442d-a939-cd235525d125" 00:15:57.479 ], 00:15:57.479 "product_name": "Malloc disk", 00:15:57.479 "block_size": 512, 00:15:57.479 "num_blocks": 65536, 00:15:57.479 "uuid": "594cb7cd-e4fb-442d-a939-cd235525d125", 00:15:57.479 "assigned_rate_limits": { 00:15:57.479 "rw_ios_per_sec": 0, 00:15:57.479 "rw_mbytes_per_sec": 0, 00:15:57.479 "r_mbytes_per_sec": 0, 00:15:57.479 "w_mbytes_per_sec": 0 00:15:57.479 }, 00:15:57.479 "claimed": true, 00:15:57.479 "claim_type": "exclusive_write", 00:15:57.479 "zoned": false, 00:15:57.479 "supported_io_types": { 
00:15:57.479 "read": true, 00:15:57.479 "write": true, 00:15:57.479 "unmap": true, 00:15:57.479 "flush": true, 00:15:57.479 "reset": true, 00:15:57.479 "nvme_admin": false, 00:15:57.479 "nvme_io": false, 00:15:57.479 "nvme_io_md": false, 00:15:57.479 "write_zeroes": true, 00:15:57.479 "zcopy": true, 00:15:57.479 "get_zone_info": false, 00:15:57.479 "zone_management": false, 00:15:57.479 "zone_append": false, 00:15:57.479 "compare": false, 00:15:57.479 "compare_and_write": false, 00:15:57.479 "abort": true, 00:15:57.479 "seek_hole": false, 00:15:57.479 "seek_data": false, 00:15:57.479 "copy": true, 00:15:57.479 "nvme_iov_md": false 00:15:57.479 }, 00:15:57.479 "memory_domains": [ 00:15:57.479 { 00:15:57.479 "dma_device_id": "system", 00:15:57.479 "dma_device_type": 1 00:15:57.480 }, 00:15:57.480 { 00:15:57.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.480 "dma_device_type": 2 00:15:57.480 } 00:15:57.480 ], 00:15:57.480 "driver_specific": {} 00:15:57.480 } 00:15:57.480 ] 00:15:57.480 12:14:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.480 12:14:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:57.480 12:14:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:57.480 12:14:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:57.480 12:14:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:57.480 12:14:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:57.480 12:14:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:57.480 12:14:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:57.480 12:14:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.480 12:14:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:57.480 12:14:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.480 12:14:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.480 12:14:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.480 12:14:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.480 12:14:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.480 12:14:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.480 12:14:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.480 12:14:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.480 12:14:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.480 12:14:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.480 "name": "Existed_Raid", 00:15:57.480 "uuid": "2229b9e6-2047-4ca4-82f7-51930d82920d", 00:15:57.480 "strip_size_kb": 64, 00:15:57.480 "state": "configuring", 00:15:57.480 "raid_level": "concat", 00:15:57.480 "superblock": true, 00:15:57.480 "num_base_bdevs": 4, 00:15:57.480 "num_base_bdevs_discovered": 3, 00:15:57.480 "num_base_bdevs_operational": 4, 00:15:57.480 "base_bdevs_list": [ 00:15:57.480 { 00:15:57.480 "name": "BaseBdev1", 00:15:57.480 "uuid": "b74d63bb-c7eb-4c97-b384-32ae7b97fcba", 00:15:57.480 "is_configured": true, 00:15:57.480 "data_offset": 2048, 00:15:57.480 "data_size": 63488 00:15:57.480 }, 00:15:57.480 { 00:15:57.480 "name": "BaseBdev2", 00:15:57.480 
"uuid": "2d218c52-339f-43aa-96e2-108cc4f2be56", 00:15:57.480 "is_configured": true, 00:15:57.480 "data_offset": 2048, 00:15:57.480 "data_size": 63488 00:15:57.480 }, 00:15:57.480 { 00:15:57.480 "name": "BaseBdev3", 00:15:57.480 "uuid": "594cb7cd-e4fb-442d-a939-cd235525d125", 00:15:57.480 "is_configured": true, 00:15:57.480 "data_offset": 2048, 00:15:57.480 "data_size": 63488 00:15:57.480 }, 00:15:57.480 { 00:15:57.480 "name": "BaseBdev4", 00:15:57.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.480 "is_configured": false, 00:15:57.480 "data_offset": 0, 00:15:57.480 "data_size": 0 00:15:57.480 } 00:15:57.480 ] 00:15:57.480 }' 00:15:57.480 12:14:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.480 12:14:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.750 12:14:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:57.750 12:14:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.750 12:14:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.015 [2024-11-25 12:14:53.859705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:58.015 [2024-11-25 12:14:53.860427] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:58.015 [2024-11-25 12:14:53.860453] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:58.015 BaseBdev4 00:15:58.015 [2024-11-25 12:14:53.860797] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:58.016 [2024-11-25 12:14:53.861011] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:58.016 [2024-11-25 12:14:53.861033] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:15:58.016 [2024-11-25 12:14:53.861224] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.016 12:14:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.016 12:14:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:58.016 12:14:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:58.016 12:14:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:58.016 12:14:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:58.016 12:14:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:58.016 12:14:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:58.016 12:14:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:58.016 12:14:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.016 12:14:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.016 12:14:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.016 12:14:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:58.016 12:14:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.016 12:14:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.016 [ 00:15:58.016 { 00:15:58.016 "name": "BaseBdev4", 00:15:58.016 "aliases": [ 00:15:58.016 "a7c33f84-210a-4b5c-b81e-d39129e6c05e" 00:15:58.016 ], 00:15:58.016 "product_name": "Malloc disk", 00:15:58.016 "block_size": 512, 00:15:58.016 
"num_blocks": 65536, 00:15:58.016 "uuid": "a7c33f84-210a-4b5c-b81e-d39129e6c05e", 00:15:58.016 "assigned_rate_limits": { 00:15:58.016 "rw_ios_per_sec": 0, 00:15:58.016 "rw_mbytes_per_sec": 0, 00:15:58.016 "r_mbytes_per_sec": 0, 00:15:58.016 "w_mbytes_per_sec": 0 00:15:58.016 }, 00:15:58.016 "claimed": true, 00:15:58.016 "claim_type": "exclusive_write", 00:15:58.016 "zoned": false, 00:15:58.016 "supported_io_types": { 00:15:58.016 "read": true, 00:15:58.016 "write": true, 00:15:58.016 "unmap": true, 00:15:58.016 "flush": true, 00:15:58.016 "reset": true, 00:15:58.016 "nvme_admin": false, 00:15:58.016 "nvme_io": false, 00:15:58.016 "nvme_io_md": false, 00:15:58.016 "write_zeroes": true, 00:15:58.016 "zcopy": true, 00:15:58.016 "get_zone_info": false, 00:15:58.016 "zone_management": false, 00:15:58.016 "zone_append": false, 00:15:58.016 "compare": false, 00:15:58.016 "compare_and_write": false, 00:15:58.016 "abort": true, 00:15:58.016 "seek_hole": false, 00:15:58.016 "seek_data": false, 00:15:58.016 "copy": true, 00:15:58.016 "nvme_iov_md": false 00:15:58.016 }, 00:15:58.016 "memory_domains": [ 00:15:58.016 { 00:15:58.016 "dma_device_id": "system", 00:15:58.016 "dma_device_type": 1 00:15:58.016 }, 00:15:58.016 { 00:15:58.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.016 "dma_device_type": 2 00:15:58.016 } 00:15:58.016 ], 00:15:58.016 "driver_specific": {} 00:15:58.016 } 00:15:58.016 ] 00:15:58.016 12:14:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.016 12:14:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:58.016 12:14:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:58.016 12:14:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:58.016 12:14:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:15:58.016 12:14:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:58.016 12:14:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.016 12:14:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:58.016 12:14:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.016 12:14:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:58.016 12:14:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.016 12:14:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.016 12:14:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.016 12:14:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.016 12:14:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.016 12:14:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:58.016 12:14:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.016 12:14:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.016 12:14:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.016 12:14:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.016 "name": "Existed_Raid", 00:15:58.016 "uuid": "2229b9e6-2047-4ca4-82f7-51930d82920d", 00:15:58.016 "strip_size_kb": 64, 00:15:58.016 "state": "online", 00:15:58.016 "raid_level": "concat", 00:15:58.016 "superblock": true, 00:15:58.016 "num_base_bdevs": 4, 
00:15:58.016 "num_base_bdevs_discovered": 4, 00:15:58.016 "num_base_bdevs_operational": 4, 00:15:58.016 "base_bdevs_list": [ 00:15:58.016 { 00:15:58.016 "name": "BaseBdev1", 00:15:58.016 "uuid": "b74d63bb-c7eb-4c97-b384-32ae7b97fcba", 00:15:58.016 "is_configured": true, 00:15:58.016 "data_offset": 2048, 00:15:58.016 "data_size": 63488 00:15:58.016 }, 00:15:58.016 { 00:15:58.016 "name": "BaseBdev2", 00:15:58.016 "uuid": "2d218c52-339f-43aa-96e2-108cc4f2be56", 00:15:58.016 "is_configured": true, 00:15:58.016 "data_offset": 2048, 00:15:58.016 "data_size": 63488 00:15:58.016 }, 00:15:58.016 { 00:15:58.016 "name": "BaseBdev3", 00:15:58.016 "uuid": "594cb7cd-e4fb-442d-a939-cd235525d125", 00:15:58.016 "is_configured": true, 00:15:58.016 "data_offset": 2048, 00:15:58.016 "data_size": 63488 00:15:58.016 }, 00:15:58.016 { 00:15:58.016 "name": "BaseBdev4", 00:15:58.016 "uuid": "a7c33f84-210a-4b5c-b81e-d39129e6c05e", 00:15:58.016 "is_configured": true, 00:15:58.016 "data_offset": 2048, 00:15:58.016 "data_size": 63488 00:15:58.016 } 00:15:58.016 ] 00:15:58.016 }' 00:15:58.016 12:14:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.016 12:14:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.584 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:58.584 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:58.584 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:58.584 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:58.584 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:58.584 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:58.584 
12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:58.584 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:58.584 12:14:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.584 12:14:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.584 [2024-11-25 12:14:54.436488] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:58.584 12:14:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.584 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:58.584 "name": "Existed_Raid", 00:15:58.584 "aliases": [ 00:15:58.584 "2229b9e6-2047-4ca4-82f7-51930d82920d" 00:15:58.584 ], 00:15:58.584 "product_name": "Raid Volume", 00:15:58.584 "block_size": 512, 00:15:58.584 "num_blocks": 253952, 00:15:58.584 "uuid": "2229b9e6-2047-4ca4-82f7-51930d82920d", 00:15:58.584 "assigned_rate_limits": { 00:15:58.584 "rw_ios_per_sec": 0, 00:15:58.584 "rw_mbytes_per_sec": 0, 00:15:58.584 "r_mbytes_per_sec": 0, 00:15:58.584 "w_mbytes_per_sec": 0 00:15:58.584 }, 00:15:58.584 "claimed": false, 00:15:58.584 "zoned": false, 00:15:58.584 "supported_io_types": { 00:15:58.584 "read": true, 00:15:58.584 "write": true, 00:15:58.584 "unmap": true, 00:15:58.584 "flush": true, 00:15:58.584 "reset": true, 00:15:58.584 "nvme_admin": false, 00:15:58.584 "nvme_io": false, 00:15:58.584 "nvme_io_md": false, 00:15:58.584 "write_zeroes": true, 00:15:58.584 "zcopy": false, 00:15:58.584 "get_zone_info": false, 00:15:58.584 "zone_management": false, 00:15:58.584 "zone_append": false, 00:15:58.584 "compare": false, 00:15:58.584 "compare_and_write": false, 00:15:58.584 "abort": false, 00:15:58.584 "seek_hole": false, 00:15:58.584 "seek_data": false, 00:15:58.584 "copy": false, 00:15:58.584 
"nvme_iov_md": false 00:15:58.584 }, 00:15:58.584 "memory_domains": [ 00:15:58.584 { 00:15:58.584 "dma_device_id": "system", 00:15:58.584 "dma_device_type": 1 00:15:58.584 }, 00:15:58.584 { 00:15:58.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.585 "dma_device_type": 2 00:15:58.585 }, 00:15:58.585 { 00:15:58.585 "dma_device_id": "system", 00:15:58.585 "dma_device_type": 1 00:15:58.585 }, 00:15:58.585 { 00:15:58.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.585 "dma_device_type": 2 00:15:58.585 }, 00:15:58.585 { 00:15:58.585 "dma_device_id": "system", 00:15:58.585 "dma_device_type": 1 00:15:58.585 }, 00:15:58.585 { 00:15:58.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.585 "dma_device_type": 2 00:15:58.585 }, 00:15:58.585 { 00:15:58.585 "dma_device_id": "system", 00:15:58.585 "dma_device_type": 1 00:15:58.585 }, 00:15:58.585 { 00:15:58.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.585 "dma_device_type": 2 00:15:58.585 } 00:15:58.585 ], 00:15:58.585 "driver_specific": { 00:15:58.585 "raid": { 00:15:58.585 "uuid": "2229b9e6-2047-4ca4-82f7-51930d82920d", 00:15:58.585 "strip_size_kb": 64, 00:15:58.585 "state": "online", 00:15:58.585 "raid_level": "concat", 00:15:58.585 "superblock": true, 00:15:58.585 "num_base_bdevs": 4, 00:15:58.585 "num_base_bdevs_discovered": 4, 00:15:58.585 "num_base_bdevs_operational": 4, 00:15:58.585 "base_bdevs_list": [ 00:15:58.585 { 00:15:58.585 "name": "BaseBdev1", 00:15:58.585 "uuid": "b74d63bb-c7eb-4c97-b384-32ae7b97fcba", 00:15:58.585 "is_configured": true, 00:15:58.585 "data_offset": 2048, 00:15:58.585 "data_size": 63488 00:15:58.585 }, 00:15:58.585 { 00:15:58.585 "name": "BaseBdev2", 00:15:58.585 "uuid": "2d218c52-339f-43aa-96e2-108cc4f2be56", 00:15:58.585 "is_configured": true, 00:15:58.585 "data_offset": 2048, 00:15:58.585 "data_size": 63488 00:15:58.585 }, 00:15:58.585 { 00:15:58.585 "name": "BaseBdev3", 00:15:58.585 "uuid": "594cb7cd-e4fb-442d-a939-cd235525d125", 00:15:58.585 "is_configured": true, 
00:15:58.585 "data_offset": 2048, 00:15:58.585 "data_size": 63488 00:15:58.585 }, 00:15:58.585 { 00:15:58.585 "name": "BaseBdev4", 00:15:58.585 "uuid": "a7c33f84-210a-4b5c-b81e-d39129e6c05e", 00:15:58.585 "is_configured": true, 00:15:58.585 "data_offset": 2048, 00:15:58.585 "data_size": 63488 00:15:58.585 } 00:15:58.585 ] 00:15:58.585 } 00:15:58.585 } 00:15:58.585 }' 00:15:58.585 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:58.585 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:58.585 BaseBdev2 00:15:58.585 BaseBdev3 00:15:58.585 BaseBdev4' 00:15:58.585 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.585 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:58.585 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:58.585 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:58.585 12:14:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.585 12:14:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.585 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.585 12:14:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.585 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:58.585 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:58.585 12:14:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:58.585 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:58.585 12:14:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.585 12:14:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.585 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.585 12:14:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.844 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:58.845 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:58.845 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:58.845 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:58.845 12:14:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.845 12:14:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.845 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.845 12:14:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.845 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:58.845 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:58.845 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:15:58.845 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:58.845 12:14:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.845 12:14:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.845 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.845 12:14:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.845 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:58.845 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:58.845 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:58.845 12:14:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.845 12:14:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.845 [2024-11-25 12:14:54.792277] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:58.845 [2024-11-25 12:14:54.792352] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:58.845 [2024-11-25 12:14:54.792444] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:58.845 12:14:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.845 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:58.845 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:15:58.845 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:15:58.845 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:15:58.845 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:15:58.845 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:15:58.845 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:58.845 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:15:58.845 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:58.845 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.845 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:58.845 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.845 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.845 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.845 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.845 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.845 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:58.845 12:14:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.845 12:14:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.845 12:14:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:59.103 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.103 "name": "Existed_Raid", 00:15:59.103 "uuid": "2229b9e6-2047-4ca4-82f7-51930d82920d", 00:15:59.103 "strip_size_kb": 64, 00:15:59.103 "state": "offline", 00:15:59.103 "raid_level": "concat", 00:15:59.103 "superblock": true, 00:15:59.103 "num_base_bdevs": 4, 00:15:59.103 "num_base_bdevs_discovered": 3, 00:15:59.103 "num_base_bdevs_operational": 3, 00:15:59.103 "base_bdevs_list": [ 00:15:59.103 { 00:15:59.103 "name": null, 00:15:59.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.103 "is_configured": false, 00:15:59.103 "data_offset": 0, 00:15:59.103 "data_size": 63488 00:15:59.103 }, 00:15:59.103 { 00:15:59.103 "name": "BaseBdev2", 00:15:59.103 "uuid": "2d218c52-339f-43aa-96e2-108cc4f2be56", 00:15:59.103 "is_configured": true, 00:15:59.103 "data_offset": 2048, 00:15:59.103 "data_size": 63488 00:15:59.103 }, 00:15:59.103 { 00:15:59.103 "name": "BaseBdev3", 00:15:59.103 "uuid": "594cb7cd-e4fb-442d-a939-cd235525d125", 00:15:59.103 "is_configured": true, 00:15:59.103 "data_offset": 2048, 00:15:59.103 "data_size": 63488 00:15:59.103 }, 00:15:59.103 { 00:15:59.103 "name": "BaseBdev4", 00:15:59.103 "uuid": "a7c33f84-210a-4b5c-b81e-d39129e6c05e", 00:15:59.103 "is_configured": true, 00:15:59.103 "data_offset": 2048, 00:15:59.103 "data_size": 63488 00:15:59.103 } 00:15:59.103 ] 00:15:59.103 }' 00:15:59.103 12:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.103 12:14:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.361 12:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:59.361 12:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:59.361 12:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.361 
12:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.361 12:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.361 12:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:59.361 12:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.361 12:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:59.361 12:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:59.361 12:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:59.361 12:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.361 12:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.361 [2024-11-25 12:14:55.436691] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:59.620 12:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.620 12:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:59.620 12:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:59.620 12:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.620 12:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.620 12:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.621 12:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:59.621 12:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:59.621 12:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:59.621 12:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:59.621 12:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:59.621 12:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.621 12:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.621 [2024-11-25 12:14:55.610970] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:59.621 12:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.621 12:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:59.621 12:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:59.621 12:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.621 12:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:59.621 12:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.621 12:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.880 12:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.880 12:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:59.880 12:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:59.880 12:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:59.880 12:14:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.880 12:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.880 [2024-11-25 12:14:55.764266] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:59.880 [2024-11-25 12:14:55.764391] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:59.880 12:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.880 12:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:59.880 12:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:59.880 12:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.880 12:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:59.880 12:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.880 12:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.880 12:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.880 12:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:59.880 12:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:59.880 12:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:59.880 12:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:59.880 12:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:59.880 12:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:15:59.880 12:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.880 12:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.880 BaseBdev2 00:15:59.880 12:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.880 12:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:59.880 12:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:59.880 12:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:59.880 12:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:59.880 12:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:59.880 12:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:59.880 12:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:59.880 12:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.880 12:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.140 12:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.140 12:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:00.140 12:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.140 12:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.140 [ 00:16:00.140 { 00:16:00.140 "name": "BaseBdev2", 00:16:00.140 "aliases": [ 00:16:00.140 
"b01b56c0-9b4e-4117-9158-bf378b85c3a9" 00:16:00.140 ], 00:16:00.140 "product_name": "Malloc disk", 00:16:00.140 "block_size": 512, 00:16:00.140 "num_blocks": 65536, 00:16:00.140 "uuid": "b01b56c0-9b4e-4117-9158-bf378b85c3a9", 00:16:00.140 "assigned_rate_limits": { 00:16:00.140 "rw_ios_per_sec": 0, 00:16:00.140 "rw_mbytes_per_sec": 0, 00:16:00.140 "r_mbytes_per_sec": 0, 00:16:00.140 "w_mbytes_per_sec": 0 00:16:00.140 }, 00:16:00.140 "claimed": false, 00:16:00.140 "zoned": false, 00:16:00.140 "supported_io_types": { 00:16:00.140 "read": true, 00:16:00.140 "write": true, 00:16:00.140 "unmap": true, 00:16:00.140 "flush": true, 00:16:00.140 "reset": true, 00:16:00.140 "nvme_admin": false, 00:16:00.140 "nvme_io": false, 00:16:00.140 "nvme_io_md": false, 00:16:00.140 "write_zeroes": true, 00:16:00.140 "zcopy": true, 00:16:00.140 "get_zone_info": false, 00:16:00.140 "zone_management": false, 00:16:00.140 "zone_append": false, 00:16:00.140 "compare": false, 00:16:00.140 "compare_and_write": false, 00:16:00.140 "abort": true, 00:16:00.140 "seek_hole": false, 00:16:00.140 "seek_data": false, 00:16:00.140 "copy": true, 00:16:00.140 "nvme_iov_md": false 00:16:00.140 }, 00:16:00.140 "memory_domains": [ 00:16:00.140 { 00:16:00.140 "dma_device_id": "system", 00:16:00.140 "dma_device_type": 1 00:16:00.140 }, 00:16:00.140 { 00:16:00.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.140 "dma_device_type": 2 00:16:00.140 } 00:16:00.140 ], 00:16:00.140 "driver_specific": {} 00:16:00.140 } 00:16:00.140 ] 00:16:00.140 12:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.140 12:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:00.140 12:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:00.141 12:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:00.141 12:14:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:00.141 12:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.141 12:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.141 BaseBdev3 00:16:00.141 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.141 12:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:00.141 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:00.141 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:00.141 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:00.141 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:00.141 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:00.141 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:00.141 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.141 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.141 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.141 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:00.141 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.141 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.141 [ 00:16:00.141 { 
00:16:00.141 "name": "BaseBdev3", 00:16:00.141 "aliases": [ 00:16:00.141 "3699438a-6b1d-4133-a029-966012c06102" 00:16:00.141 ], 00:16:00.141 "product_name": "Malloc disk", 00:16:00.141 "block_size": 512, 00:16:00.141 "num_blocks": 65536, 00:16:00.141 "uuid": "3699438a-6b1d-4133-a029-966012c06102", 00:16:00.141 "assigned_rate_limits": { 00:16:00.141 "rw_ios_per_sec": 0, 00:16:00.141 "rw_mbytes_per_sec": 0, 00:16:00.141 "r_mbytes_per_sec": 0, 00:16:00.141 "w_mbytes_per_sec": 0 00:16:00.141 }, 00:16:00.141 "claimed": false, 00:16:00.141 "zoned": false, 00:16:00.141 "supported_io_types": { 00:16:00.141 "read": true, 00:16:00.141 "write": true, 00:16:00.141 "unmap": true, 00:16:00.141 "flush": true, 00:16:00.141 "reset": true, 00:16:00.141 "nvme_admin": false, 00:16:00.141 "nvme_io": false, 00:16:00.141 "nvme_io_md": false, 00:16:00.141 "write_zeroes": true, 00:16:00.141 "zcopy": true, 00:16:00.141 "get_zone_info": false, 00:16:00.141 "zone_management": false, 00:16:00.141 "zone_append": false, 00:16:00.141 "compare": false, 00:16:00.141 "compare_and_write": false, 00:16:00.141 "abort": true, 00:16:00.141 "seek_hole": false, 00:16:00.141 "seek_data": false, 00:16:00.141 "copy": true, 00:16:00.141 "nvme_iov_md": false 00:16:00.141 }, 00:16:00.141 "memory_domains": [ 00:16:00.141 { 00:16:00.141 "dma_device_id": "system", 00:16:00.141 "dma_device_type": 1 00:16:00.141 }, 00:16:00.141 { 00:16:00.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.141 "dma_device_type": 2 00:16:00.141 } 00:16:00.141 ], 00:16:00.141 "driver_specific": {} 00:16:00.141 } 00:16:00.141 ] 00:16:00.141 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.141 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:00.141 12:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:00.141 12:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:16:00.141 12:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:00.141 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.141 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.141 BaseBdev4 00:16:00.141 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.141 12:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:00.141 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:00.141 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:00.141 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:00.141 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:00.141 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:00.141 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:00.141 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.141 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.141 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.141 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:00.141 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.141 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:16:00.141 [ 00:16:00.141 { 00:16:00.141 "name": "BaseBdev4", 00:16:00.141 "aliases": [ 00:16:00.141 "de315c56-ddbe-4619-8a0b-35c7f0b6c21f" 00:16:00.141 ], 00:16:00.141 "product_name": "Malloc disk", 00:16:00.141 "block_size": 512, 00:16:00.141 "num_blocks": 65536, 00:16:00.141 "uuid": "de315c56-ddbe-4619-8a0b-35c7f0b6c21f", 00:16:00.141 "assigned_rate_limits": { 00:16:00.141 "rw_ios_per_sec": 0, 00:16:00.141 "rw_mbytes_per_sec": 0, 00:16:00.141 "r_mbytes_per_sec": 0, 00:16:00.141 "w_mbytes_per_sec": 0 00:16:00.141 }, 00:16:00.141 "claimed": false, 00:16:00.141 "zoned": false, 00:16:00.141 "supported_io_types": { 00:16:00.141 "read": true, 00:16:00.141 "write": true, 00:16:00.141 "unmap": true, 00:16:00.141 "flush": true, 00:16:00.141 "reset": true, 00:16:00.141 "nvme_admin": false, 00:16:00.141 "nvme_io": false, 00:16:00.141 "nvme_io_md": false, 00:16:00.141 "write_zeroes": true, 00:16:00.141 "zcopy": true, 00:16:00.141 "get_zone_info": false, 00:16:00.141 "zone_management": false, 00:16:00.141 "zone_append": false, 00:16:00.141 "compare": false, 00:16:00.141 "compare_and_write": false, 00:16:00.141 "abort": true, 00:16:00.141 "seek_hole": false, 00:16:00.141 "seek_data": false, 00:16:00.141 "copy": true, 00:16:00.141 "nvme_iov_md": false 00:16:00.141 }, 00:16:00.141 "memory_domains": [ 00:16:00.141 { 00:16:00.141 "dma_device_id": "system", 00:16:00.141 "dma_device_type": 1 00:16:00.141 }, 00:16:00.141 { 00:16:00.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.141 "dma_device_type": 2 00:16:00.141 } 00:16:00.141 ], 00:16:00.141 "driver_specific": {} 00:16:00.141 } 00:16:00.141 ] 00:16:00.141 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.142 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:00.142 12:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:00.142 12:14:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:00.142 12:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:00.142 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.142 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.142 [2024-11-25 12:14:56.157847] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:00.142 [2024-11-25 12:14:56.158226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:00.142 [2024-11-25 12:14:56.158286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:00.142 [2024-11-25 12:14:56.160938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:00.142 [2024-11-25 12:14:56.161012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:00.142 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.142 12:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:00.142 12:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:00.142 12:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:00.142 12:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:00.142 12:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.142 12:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:16:00.142 12:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.142 12:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.142 12:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.142 12:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.142 12:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.142 12:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.142 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.142 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.142 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.142 12:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.142 "name": "Existed_Raid", 00:16:00.142 "uuid": "de8b2800-5e32-4868-9d71-554419fd336b", 00:16:00.142 "strip_size_kb": 64, 00:16:00.142 "state": "configuring", 00:16:00.142 "raid_level": "concat", 00:16:00.142 "superblock": true, 00:16:00.142 "num_base_bdevs": 4, 00:16:00.142 "num_base_bdevs_discovered": 3, 00:16:00.142 "num_base_bdevs_operational": 4, 00:16:00.142 "base_bdevs_list": [ 00:16:00.142 { 00:16:00.142 "name": "BaseBdev1", 00:16:00.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.142 "is_configured": false, 00:16:00.142 "data_offset": 0, 00:16:00.142 "data_size": 0 00:16:00.142 }, 00:16:00.142 { 00:16:00.142 "name": "BaseBdev2", 00:16:00.142 "uuid": "b01b56c0-9b4e-4117-9158-bf378b85c3a9", 00:16:00.142 "is_configured": true, 00:16:00.142 "data_offset": 2048, 00:16:00.142 "data_size": 63488 
00:16:00.142 }, 00:16:00.142 { 00:16:00.142 "name": "BaseBdev3", 00:16:00.142 "uuid": "3699438a-6b1d-4133-a029-966012c06102", 00:16:00.142 "is_configured": true, 00:16:00.142 "data_offset": 2048, 00:16:00.142 "data_size": 63488 00:16:00.142 }, 00:16:00.142 { 00:16:00.142 "name": "BaseBdev4", 00:16:00.142 "uuid": "de315c56-ddbe-4619-8a0b-35c7f0b6c21f", 00:16:00.142 "is_configured": true, 00:16:00.142 "data_offset": 2048, 00:16:00.142 "data_size": 63488 00:16:00.142 } 00:16:00.142 ] 00:16:00.142 }' 00:16:00.142 12:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.142 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.798 12:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:00.798 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.798 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.798 [2024-11-25 12:14:56.642000] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:00.798 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.798 12:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:00.798 12:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:00.798 12:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:00.798 12:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:00.798 12:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.798 12:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:16:00.798 12:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.798 12:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.798 12:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.798 12:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.798 12:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.798 12:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.798 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.798 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.798 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.798 12:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.798 "name": "Existed_Raid", 00:16:00.798 "uuid": "de8b2800-5e32-4868-9d71-554419fd336b", 00:16:00.798 "strip_size_kb": 64, 00:16:00.798 "state": "configuring", 00:16:00.798 "raid_level": "concat", 00:16:00.798 "superblock": true, 00:16:00.798 "num_base_bdevs": 4, 00:16:00.798 "num_base_bdevs_discovered": 2, 00:16:00.798 "num_base_bdevs_operational": 4, 00:16:00.798 "base_bdevs_list": [ 00:16:00.798 { 00:16:00.798 "name": "BaseBdev1", 00:16:00.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.798 "is_configured": false, 00:16:00.798 "data_offset": 0, 00:16:00.798 "data_size": 0 00:16:00.798 }, 00:16:00.798 { 00:16:00.798 "name": null, 00:16:00.798 "uuid": "b01b56c0-9b4e-4117-9158-bf378b85c3a9", 00:16:00.798 "is_configured": false, 00:16:00.798 "data_offset": 0, 00:16:00.798 "data_size": 63488 
00:16:00.798 }, 00:16:00.798 { 00:16:00.798 "name": "BaseBdev3", 00:16:00.798 "uuid": "3699438a-6b1d-4133-a029-966012c06102", 00:16:00.798 "is_configured": true, 00:16:00.798 "data_offset": 2048, 00:16:00.798 "data_size": 63488 00:16:00.798 }, 00:16:00.798 { 00:16:00.798 "name": "BaseBdev4", 00:16:00.798 "uuid": "de315c56-ddbe-4619-8a0b-35c7f0b6c21f", 00:16:00.798 "is_configured": true, 00:16:00.798 "data_offset": 2048, 00:16:00.798 "data_size": 63488 00:16:00.798 } 00:16:00.798 ] 00:16:00.798 }' 00:16:00.798 12:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.798 12:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.058 12:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.058 12:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.058 12:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:01.058 12:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.318 12:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.318 12:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:01.318 12:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:01.318 12:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.318 12:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.319 [2024-11-25 12:14:57.227751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:01.319 BaseBdev1 00:16:01.319 12:14:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.319 12:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:01.319 12:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:01.319 12:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:01.319 12:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:01.319 12:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:01.319 12:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:01.319 12:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:01.319 12:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.319 12:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.319 12:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.319 12:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:01.319 12:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.319 12:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.319 [ 00:16:01.319 { 00:16:01.319 "name": "BaseBdev1", 00:16:01.319 "aliases": [ 00:16:01.319 "2b834779-4584-4c57-a150-ac4ef62a3308" 00:16:01.319 ], 00:16:01.319 "product_name": "Malloc disk", 00:16:01.319 "block_size": 512, 00:16:01.319 "num_blocks": 65536, 00:16:01.319 "uuid": "2b834779-4584-4c57-a150-ac4ef62a3308", 00:16:01.319 "assigned_rate_limits": { 00:16:01.319 "rw_ios_per_sec": 0, 00:16:01.319 "rw_mbytes_per_sec": 0, 
00:16:01.319 "r_mbytes_per_sec": 0, 00:16:01.319 "w_mbytes_per_sec": 0 00:16:01.319 }, 00:16:01.319 "claimed": true, 00:16:01.319 "claim_type": "exclusive_write", 00:16:01.319 "zoned": false, 00:16:01.319 "supported_io_types": { 00:16:01.319 "read": true, 00:16:01.319 "write": true, 00:16:01.319 "unmap": true, 00:16:01.319 "flush": true, 00:16:01.319 "reset": true, 00:16:01.319 "nvme_admin": false, 00:16:01.319 "nvme_io": false, 00:16:01.319 "nvme_io_md": false, 00:16:01.319 "write_zeroes": true, 00:16:01.319 "zcopy": true, 00:16:01.319 "get_zone_info": false, 00:16:01.319 "zone_management": false, 00:16:01.319 "zone_append": false, 00:16:01.319 "compare": false, 00:16:01.319 "compare_and_write": false, 00:16:01.319 "abort": true, 00:16:01.319 "seek_hole": false, 00:16:01.319 "seek_data": false, 00:16:01.319 "copy": true, 00:16:01.319 "nvme_iov_md": false 00:16:01.319 }, 00:16:01.319 "memory_domains": [ 00:16:01.319 { 00:16:01.319 "dma_device_id": "system", 00:16:01.319 "dma_device_type": 1 00:16:01.319 }, 00:16:01.319 { 00:16:01.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.319 "dma_device_type": 2 00:16:01.319 } 00:16:01.319 ], 00:16:01.319 "driver_specific": {} 00:16:01.319 } 00:16:01.319 ] 00:16:01.319 12:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.319 12:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:01.319 12:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:01.319 12:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:01.319 12:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:01.319 12:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:01.319 12:14:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.319 12:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:01.319 12:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.319 12:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.319 12:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.319 12:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.319 12:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.319 12:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.319 12:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.319 12:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.319 12:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.319 12:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.319 "name": "Existed_Raid", 00:16:01.319 "uuid": "de8b2800-5e32-4868-9d71-554419fd336b", 00:16:01.319 "strip_size_kb": 64, 00:16:01.319 "state": "configuring", 00:16:01.319 "raid_level": "concat", 00:16:01.319 "superblock": true, 00:16:01.319 "num_base_bdevs": 4, 00:16:01.319 "num_base_bdevs_discovered": 3, 00:16:01.319 "num_base_bdevs_operational": 4, 00:16:01.319 "base_bdevs_list": [ 00:16:01.319 { 00:16:01.319 "name": "BaseBdev1", 00:16:01.319 "uuid": "2b834779-4584-4c57-a150-ac4ef62a3308", 00:16:01.319 "is_configured": true, 00:16:01.319 "data_offset": 2048, 00:16:01.319 "data_size": 63488 00:16:01.319 }, 00:16:01.319 { 
00:16:01.319 "name": null, 00:16:01.319 "uuid": "b01b56c0-9b4e-4117-9158-bf378b85c3a9", 00:16:01.319 "is_configured": false, 00:16:01.319 "data_offset": 0, 00:16:01.319 "data_size": 63488 00:16:01.319 }, 00:16:01.319 { 00:16:01.319 "name": "BaseBdev3", 00:16:01.319 "uuid": "3699438a-6b1d-4133-a029-966012c06102", 00:16:01.319 "is_configured": true, 00:16:01.319 "data_offset": 2048, 00:16:01.319 "data_size": 63488 00:16:01.319 }, 00:16:01.319 { 00:16:01.319 "name": "BaseBdev4", 00:16:01.319 "uuid": "de315c56-ddbe-4619-8a0b-35c7f0b6c21f", 00:16:01.319 "is_configured": true, 00:16:01.319 "data_offset": 2048, 00:16:01.319 "data_size": 63488 00:16:01.319 } 00:16:01.319 ] 00:16:01.319 }' 00:16:01.319 12:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.319 12:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.887 12:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.887 12:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.887 12:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.887 12:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:01.887 12:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.887 12:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:01.887 12:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:01.887 12:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.887 12:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.887 [2024-11-25 12:14:57.764038] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:01.887 12:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.887 12:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:01.887 12:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:01.887 12:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:01.887 12:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:01.887 12:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.887 12:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:01.887 12:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.887 12:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.887 12:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.887 12:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.887 12:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.887 12:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.887 12:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.887 12:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.887 12:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.887 12:14:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.887 "name": "Existed_Raid", 00:16:01.887 "uuid": "de8b2800-5e32-4868-9d71-554419fd336b", 00:16:01.887 "strip_size_kb": 64, 00:16:01.887 "state": "configuring", 00:16:01.887 "raid_level": "concat", 00:16:01.887 "superblock": true, 00:16:01.887 "num_base_bdevs": 4, 00:16:01.887 "num_base_bdevs_discovered": 2, 00:16:01.887 "num_base_bdevs_operational": 4, 00:16:01.887 "base_bdevs_list": [ 00:16:01.887 { 00:16:01.887 "name": "BaseBdev1", 00:16:01.887 "uuid": "2b834779-4584-4c57-a150-ac4ef62a3308", 00:16:01.887 "is_configured": true, 00:16:01.887 "data_offset": 2048, 00:16:01.887 "data_size": 63488 00:16:01.887 }, 00:16:01.887 { 00:16:01.887 "name": null, 00:16:01.887 "uuid": "b01b56c0-9b4e-4117-9158-bf378b85c3a9", 00:16:01.887 "is_configured": false, 00:16:01.887 "data_offset": 0, 00:16:01.887 "data_size": 63488 00:16:01.887 }, 00:16:01.887 { 00:16:01.887 "name": null, 00:16:01.887 "uuid": "3699438a-6b1d-4133-a029-966012c06102", 00:16:01.887 "is_configured": false, 00:16:01.887 "data_offset": 0, 00:16:01.887 "data_size": 63488 00:16:01.887 }, 00:16:01.887 { 00:16:01.887 "name": "BaseBdev4", 00:16:01.887 "uuid": "de315c56-ddbe-4619-8a0b-35c7f0b6c21f", 00:16:01.887 "is_configured": true, 00:16:01.887 "data_offset": 2048, 00:16:01.887 "data_size": 63488 00:16:01.887 } 00:16:01.887 ] 00:16:01.887 }' 00:16:01.887 12:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.887 12:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.455 12:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:02.455 12:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.455 12:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.455 
12:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.455 12:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.455 12:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:02.455 12:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:02.455 12:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.455 12:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.455 [2024-11-25 12:14:58.324156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:02.455 12:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.455 12:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:02.455 12:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:02.455 12:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:02.455 12:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:02.455 12:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.455 12:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:02.455 12:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.455 12:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.455 12:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:02.455 12:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.455 12:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.455 12:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.455 12:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.455 12:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.455 12:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.455 12:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.455 "name": "Existed_Raid", 00:16:02.455 "uuid": "de8b2800-5e32-4868-9d71-554419fd336b", 00:16:02.455 "strip_size_kb": 64, 00:16:02.455 "state": "configuring", 00:16:02.455 "raid_level": "concat", 00:16:02.455 "superblock": true, 00:16:02.455 "num_base_bdevs": 4, 00:16:02.455 "num_base_bdevs_discovered": 3, 00:16:02.455 "num_base_bdevs_operational": 4, 00:16:02.455 "base_bdevs_list": [ 00:16:02.455 { 00:16:02.455 "name": "BaseBdev1", 00:16:02.455 "uuid": "2b834779-4584-4c57-a150-ac4ef62a3308", 00:16:02.455 "is_configured": true, 00:16:02.455 "data_offset": 2048, 00:16:02.455 "data_size": 63488 00:16:02.455 }, 00:16:02.455 { 00:16:02.455 "name": null, 00:16:02.455 "uuid": "b01b56c0-9b4e-4117-9158-bf378b85c3a9", 00:16:02.455 "is_configured": false, 00:16:02.455 "data_offset": 0, 00:16:02.455 "data_size": 63488 00:16:02.455 }, 00:16:02.455 { 00:16:02.455 "name": "BaseBdev3", 00:16:02.455 "uuid": "3699438a-6b1d-4133-a029-966012c06102", 00:16:02.455 "is_configured": true, 00:16:02.455 "data_offset": 2048, 00:16:02.455 "data_size": 63488 00:16:02.455 }, 00:16:02.455 { 00:16:02.455 "name": "BaseBdev4", 00:16:02.455 "uuid": 
"de315c56-ddbe-4619-8a0b-35c7f0b6c21f", 00:16:02.455 "is_configured": true, 00:16:02.455 "data_offset": 2048, 00:16:02.455 "data_size": 63488 00:16:02.455 } 00:16:02.455 ] 00:16:02.455 }' 00:16:02.455 12:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.455 12:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.024 12:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:03.024 12:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.024 12:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.024 12:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.024 12:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.024 12:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:03.024 12:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:03.024 12:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.024 12:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.024 [2024-11-25 12:14:58.900405] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:03.024 12:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.024 12:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:03.024 12:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:03.024 12:14:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:03.024 12:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:03.024 12:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.024 12:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:03.024 12:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.024 12:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.024 12:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.024 12:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.024 12:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.024 12:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.024 12:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.024 12:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.024 12:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.024 12:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.024 "name": "Existed_Raid", 00:16:03.024 "uuid": "de8b2800-5e32-4868-9d71-554419fd336b", 00:16:03.024 "strip_size_kb": 64, 00:16:03.024 "state": "configuring", 00:16:03.024 "raid_level": "concat", 00:16:03.024 "superblock": true, 00:16:03.024 "num_base_bdevs": 4, 00:16:03.024 "num_base_bdevs_discovered": 2, 00:16:03.024 "num_base_bdevs_operational": 4, 00:16:03.024 "base_bdevs_list": [ 00:16:03.024 { 00:16:03.024 "name": null, 00:16:03.024 
"uuid": "2b834779-4584-4c57-a150-ac4ef62a3308", 00:16:03.024 "is_configured": false, 00:16:03.024 "data_offset": 0, 00:16:03.024 "data_size": 63488 00:16:03.024 }, 00:16:03.024 { 00:16:03.024 "name": null, 00:16:03.024 "uuid": "b01b56c0-9b4e-4117-9158-bf378b85c3a9", 00:16:03.024 "is_configured": false, 00:16:03.024 "data_offset": 0, 00:16:03.024 "data_size": 63488 00:16:03.024 }, 00:16:03.024 { 00:16:03.024 "name": "BaseBdev3", 00:16:03.024 "uuid": "3699438a-6b1d-4133-a029-966012c06102", 00:16:03.024 "is_configured": true, 00:16:03.024 "data_offset": 2048, 00:16:03.024 "data_size": 63488 00:16:03.024 }, 00:16:03.024 { 00:16:03.024 "name": "BaseBdev4", 00:16:03.024 "uuid": "de315c56-ddbe-4619-8a0b-35c7f0b6c21f", 00:16:03.024 "is_configured": true, 00:16:03.024 "data_offset": 2048, 00:16:03.024 "data_size": 63488 00:16:03.024 } 00:16:03.024 ] 00:16:03.024 }' 00:16:03.024 12:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.024 12:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.597 12:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.597 12:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:03.597 12:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.597 12:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.597 12:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.597 12:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:03.597 12:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:03.597 12:14:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.597 12:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.597 [2024-11-25 12:14:59.582182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:03.597 12:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.597 12:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:03.597 12:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:03.597 12:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:03.597 12:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:03.597 12:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.597 12:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:03.597 12:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.597 12:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.597 12:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.597 12:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.597 12:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.597 12:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.597 12:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.597 12:14:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.597 12:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.597 12:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.597 "name": "Existed_Raid", 00:16:03.597 "uuid": "de8b2800-5e32-4868-9d71-554419fd336b", 00:16:03.597 "strip_size_kb": 64, 00:16:03.597 "state": "configuring", 00:16:03.597 "raid_level": "concat", 00:16:03.597 "superblock": true, 00:16:03.597 "num_base_bdevs": 4, 00:16:03.597 "num_base_bdevs_discovered": 3, 00:16:03.597 "num_base_bdevs_operational": 4, 00:16:03.597 "base_bdevs_list": [ 00:16:03.597 { 00:16:03.597 "name": null, 00:16:03.597 "uuid": "2b834779-4584-4c57-a150-ac4ef62a3308", 00:16:03.597 "is_configured": false, 00:16:03.597 "data_offset": 0, 00:16:03.597 "data_size": 63488 00:16:03.597 }, 00:16:03.597 { 00:16:03.597 "name": "BaseBdev2", 00:16:03.597 "uuid": "b01b56c0-9b4e-4117-9158-bf378b85c3a9", 00:16:03.597 "is_configured": true, 00:16:03.597 "data_offset": 2048, 00:16:03.597 "data_size": 63488 00:16:03.597 }, 00:16:03.597 { 00:16:03.597 "name": "BaseBdev3", 00:16:03.597 "uuid": "3699438a-6b1d-4133-a029-966012c06102", 00:16:03.597 "is_configured": true, 00:16:03.597 "data_offset": 2048, 00:16:03.597 "data_size": 63488 00:16:03.597 }, 00:16:03.597 { 00:16:03.597 "name": "BaseBdev4", 00:16:03.597 "uuid": "de315c56-ddbe-4619-8a0b-35c7f0b6c21f", 00:16:03.597 "is_configured": true, 00:16:03.597 "data_offset": 2048, 00:16:03.597 "data_size": 63488 00:16:03.597 } 00:16:03.597 ] 00:16:03.597 }' 00:16:03.597 12:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.597 12:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.165 12:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.165 12:15:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.165 12:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:04.165 12:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.165 12:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.165 12:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:04.165 12:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.165 12:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.165 12:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:04.165 12:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.165 12:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.165 12:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2b834779-4584-4c57-a150-ac4ef62a3308 00:16:04.165 12:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.165 12:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.165 [2024-11-25 12:15:00.228019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:04.165 [2024-11-25 12:15:00.228370] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:04.165 [2024-11-25 12:15:00.228390] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:04.165 [2024-11-25 12:15:00.228720] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:16:04.165 NewBaseBdev 00:16:04.165 [2024-11-25 12:15:00.228903] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:04.165 [2024-11-25 12:15:00.228925] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:04.165 [2024-11-25 12:15:00.229082] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.165 12:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.166 12:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:04.166 12:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:04.166 12:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:04.166 12:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:04.166 12:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:04.166 12:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:04.166 12:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:04.166 12:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.166 12:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.166 12:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.166 12:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:04.166 12:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.166 12:15:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.166 [ 00:16:04.166 { 00:16:04.166 "name": "NewBaseBdev", 00:16:04.166 "aliases": [ 00:16:04.166 "2b834779-4584-4c57-a150-ac4ef62a3308" 00:16:04.166 ], 00:16:04.166 "product_name": "Malloc disk", 00:16:04.166 "block_size": 512, 00:16:04.166 "num_blocks": 65536, 00:16:04.166 "uuid": "2b834779-4584-4c57-a150-ac4ef62a3308", 00:16:04.166 "assigned_rate_limits": { 00:16:04.166 "rw_ios_per_sec": 0, 00:16:04.166 "rw_mbytes_per_sec": 0, 00:16:04.166 "r_mbytes_per_sec": 0, 00:16:04.166 "w_mbytes_per_sec": 0 00:16:04.166 }, 00:16:04.166 "claimed": true, 00:16:04.166 "claim_type": "exclusive_write", 00:16:04.166 "zoned": false, 00:16:04.166 "supported_io_types": { 00:16:04.166 "read": true, 00:16:04.166 "write": true, 00:16:04.166 "unmap": true, 00:16:04.424 "flush": true, 00:16:04.425 "reset": true, 00:16:04.425 "nvme_admin": false, 00:16:04.425 "nvme_io": false, 00:16:04.425 "nvme_io_md": false, 00:16:04.425 "write_zeroes": true, 00:16:04.425 "zcopy": true, 00:16:04.425 "get_zone_info": false, 00:16:04.425 "zone_management": false, 00:16:04.425 "zone_append": false, 00:16:04.425 "compare": false, 00:16:04.425 "compare_and_write": false, 00:16:04.425 "abort": true, 00:16:04.425 "seek_hole": false, 00:16:04.425 "seek_data": false, 00:16:04.425 "copy": true, 00:16:04.425 "nvme_iov_md": false 00:16:04.425 }, 00:16:04.425 "memory_domains": [ 00:16:04.425 { 00:16:04.425 "dma_device_id": "system", 00:16:04.425 "dma_device_type": 1 00:16:04.425 }, 00:16:04.425 { 00:16:04.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.425 "dma_device_type": 2 00:16:04.425 } 00:16:04.425 ], 00:16:04.425 "driver_specific": {} 00:16:04.425 } 00:16:04.425 ] 00:16:04.425 12:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.425 12:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:04.425 12:15:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:16:04.425 12:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:04.425 12:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.425 12:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:04.425 12:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.425 12:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:04.425 12:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.425 12:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.425 12:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.425 12:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.425 12:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.425 12:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.425 12:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.425 12:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.425 12:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.425 12:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.425 "name": "Existed_Raid", 00:16:04.425 "uuid": "de8b2800-5e32-4868-9d71-554419fd336b", 00:16:04.425 "strip_size_kb": 64, 00:16:04.425 
"state": "online", 00:16:04.425 "raid_level": "concat", 00:16:04.425 "superblock": true, 00:16:04.425 "num_base_bdevs": 4, 00:16:04.425 "num_base_bdevs_discovered": 4, 00:16:04.425 "num_base_bdevs_operational": 4, 00:16:04.425 "base_bdevs_list": [ 00:16:04.425 { 00:16:04.425 "name": "NewBaseBdev", 00:16:04.425 "uuid": "2b834779-4584-4c57-a150-ac4ef62a3308", 00:16:04.425 "is_configured": true, 00:16:04.425 "data_offset": 2048, 00:16:04.425 "data_size": 63488 00:16:04.425 }, 00:16:04.425 { 00:16:04.425 "name": "BaseBdev2", 00:16:04.425 "uuid": "b01b56c0-9b4e-4117-9158-bf378b85c3a9", 00:16:04.425 "is_configured": true, 00:16:04.425 "data_offset": 2048, 00:16:04.425 "data_size": 63488 00:16:04.425 }, 00:16:04.425 { 00:16:04.425 "name": "BaseBdev3", 00:16:04.425 "uuid": "3699438a-6b1d-4133-a029-966012c06102", 00:16:04.425 "is_configured": true, 00:16:04.425 "data_offset": 2048, 00:16:04.425 "data_size": 63488 00:16:04.425 }, 00:16:04.425 { 00:16:04.425 "name": "BaseBdev4", 00:16:04.425 "uuid": "de315c56-ddbe-4619-8a0b-35c7f0b6c21f", 00:16:04.425 "is_configured": true, 00:16:04.425 "data_offset": 2048, 00:16:04.425 "data_size": 63488 00:16:04.425 } 00:16:04.425 ] 00:16:04.425 }' 00:16:04.425 12:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.425 12:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.718 12:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:04.718 12:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:04.718 12:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:04.718 12:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:04.718 12:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:04.718 
12:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:04.718 12:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:04.718 12:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.718 12:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.718 12:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:04.718 [2024-11-25 12:15:00.772698] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:04.718 12:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.978 12:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:04.978 "name": "Existed_Raid", 00:16:04.978 "aliases": [ 00:16:04.978 "de8b2800-5e32-4868-9d71-554419fd336b" 00:16:04.978 ], 00:16:04.978 "product_name": "Raid Volume", 00:16:04.978 "block_size": 512, 00:16:04.978 "num_blocks": 253952, 00:16:04.978 "uuid": "de8b2800-5e32-4868-9d71-554419fd336b", 00:16:04.978 "assigned_rate_limits": { 00:16:04.978 "rw_ios_per_sec": 0, 00:16:04.978 "rw_mbytes_per_sec": 0, 00:16:04.978 "r_mbytes_per_sec": 0, 00:16:04.978 "w_mbytes_per_sec": 0 00:16:04.978 }, 00:16:04.978 "claimed": false, 00:16:04.978 "zoned": false, 00:16:04.978 "supported_io_types": { 00:16:04.978 "read": true, 00:16:04.978 "write": true, 00:16:04.978 "unmap": true, 00:16:04.978 "flush": true, 00:16:04.978 "reset": true, 00:16:04.978 "nvme_admin": false, 00:16:04.978 "nvme_io": false, 00:16:04.978 "nvme_io_md": false, 00:16:04.978 "write_zeroes": true, 00:16:04.978 "zcopy": false, 00:16:04.978 "get_zone_info": false, 00:16:04.978 "zone_management": false, 00:16:04.978 "zone_append": false, 00:16:04.978 "compare": false, 00:16:04.978 "compare_and_write": false, 00:16:04.978 "abort": 
false, 00:16:04.978 "seek_hole": false, 00:16:04.978 "seek_data": false, 00:16:04.978 "copy": false, 00:16:04.978 "nvme_iov_md": false 00:16:04.978 }, 00:16:04.978 "memory_domains": [ 00:16:04.978 { 00:16:04.978 "dma_device_id": "system", 00:16:04.978 "dma_device_type": 1 00:16:04.978 }, 00:16:04.978 { 00:16:04.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.978 "dma_device_type": 2 00:16:04.978 }, 00:16:04.978 { 00:16:04.978 "dma_device_id": "system", 00:16:04.978 "dma_device_type": 1 00:16:04.978 }, 00:16:04.978 { 00:16:04.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.978 "dma_device_type": 2 00:16:04.978 }, 00:16:04.978 { 00:16:04.978 "dma_device_id": "system", 00:16:04.978 "dma_device_type": 1 00:16:04.979 }, 00:16:04.979 { 00:16:04.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.979 "dma_device_type": 2 00:16:04.979 }, 00:16:04.979 { 00:16:04.979 "dma_device_id": "system", 00:16:04.979 "dma_device_type": 1 00:16:04.979 }, 00:16:04.979 { 00:16:04.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.979 "dma_device_type": 2 00:16:04.979 } 00:16:04.979 ], 00:16:04.979 "driver_specific": { 00:16:04.979 "raid": { 00:16:04.979 "uuid": "de8b2800-5e32-4868-9d71-554419fd336b", 00:16:04.979 "strip_size_kb": 64, 00:16:04.979 "state": "online", 00:16:04.979 "raid_level": "concat", 00:16:04.979 "superblock": true, 00:16:04.979 "num_base_bdevs": 4, 00:16:04.979 "num_base_bdevs_discovered": 4, 00:16:04.979 "num_base_bdevs_operational": 4, 00:16:04.979 "base_bdevs_list": [ 00:16:04.979 { 00:16:04.979 "name": "NewBaseBdev", 00:16:04.979 "uuid": "2b834779-4584-4c57-a150-ac4ef62a3308", 00:16:04.979 "is_configured": true, 00:16:04.979 "data_offset": 2048, 00:16:04.979 "data_size": 63488 00:16:04.979 }, 00:16:04.979 { 00:16:04.979 "name": "BaseBdev2", 00:16:04.979 "uuid": "b01b56c0-9b4e-4117-9158-bf378b85c3a9", 00:16:04.979 "is_configured": true, 00:16:04.979 "data_offset": 2048, 00:16:04.979 "data_size": 63488 00:16:04.979 }, 00:16:04.979 { 00:16:04.979 
"name": "BaseBdev3", 00:16:04.979 "uuid": "3699438a-6b1d-4133-a029-966012c06102", 00:16:04.979 "is_configured": true, 00:16:04.979 "data_offset": 2048, 00:16:04.979 "data_size": 63488 00:16:04.979 }, 00:16:04.979 { 00:16:04.979 "name": "BaseBdev4", 00:16:04.979 "uuid": "de315c56-ddbe-4619-8a0b-35c7f0b6c21f", 00:16:04.979 "is_configured": true, 00:16:04.979 "data_offset": 2048, 00:16:04.979 "data_size": 63488 00:16:04.979 } 00:16:04.979 ] 00:16:04.979 } 00:16:04.979 } 00:16:04.979 }' 00:16:04.979 12:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:04.979 12:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:04.979 BaseBdev2 00:16:04.979 BaseBdev3 00:16:04.979 BaseBdev4' 00:16:04.979 12:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.979 12:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:04.979 12:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:04.979 12:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.979 12:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:04.979 12:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.979 12:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.979 12:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.979 12:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:04.979 12:15:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:04.979 12:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:04.979 12:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:04.979 12:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.979 12:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.979 12:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.979 12:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.979 12:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:04.979 12:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:04.979 12:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:04.979 12:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:04.979 12:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.979 12:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.979 12:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.979 12:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.239 12:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:05.239 12:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:16:05.239 12:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:05.239 12:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:05.239 12:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.239 12:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.239 12:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:05.239 12:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.239 12:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:05.239 12:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:05.239 12:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:05.239 12:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.239 12:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.239 [2024-11-25 12:15:01.144324] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:05.239 [2024-11-25 12:15:01.144376] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:05.239 [2024-11-25 12:15:01.144484] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:05.239 [2024-11-25 12:15:01.144586] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:05.239 [2024-11-25 12:15:01.144603] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:16:05.239 12:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.239 12:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72082 00:16:05.239 12:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72082 ']' 00:16:05.239 12:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72082 00:16:05.239 12:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:05.239 12:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:05.239 12:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72082 00:16:05.239 12:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:05.239 killing process with pid 72082 00:16:05.239 12:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:05.239 12:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72082' 00:16:05.239 12:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72082 00:16:05.239 [2024-11-25 12:15:01.181516] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:05.239 12:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72082 00:16:05.502 [2024-11-25 12:15:01.540769] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:06.880 12:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:06.880 00:16:06.880 real 0m12.668s 00:16:06.880 user 0m20.780s 00:16:06.880 sys 0m1.863s 00:16:06.880 12:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:06.880 
************************************ 00:16:06.880 END TEST raid_state_function_test_sb 00:16:06.880 ************************************ 00:16:06.880 12:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.880 12:15:02 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:16:06.880 12:15:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:06.880 12:15:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:06.880 12:15:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:06.880 ************************************ 00:16:06.880 START TEST raid_superblock_test 00:16:06.880 ************************************ 00:16:06.880 12:15:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:16:06.880 12:15:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:16:06.880 12:15:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:06.880 12:15:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:06.880 12:15:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:06.880 12:15:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:06.880 12:15:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:06.880 12:15:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:06.880 12:15:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:06.880 12:15:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:06.880 12:15:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:06.880 12:15:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:06.880 12:15:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:06.880 12:15:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:06.880 12:15:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:16:06.880 12:15:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:06.880 12:15:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:06.880 12:15:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72765 00:16:06.880 12:15:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:06.880 12:15:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72765 00:16:06.880 12:15:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72765 ']' 00:16:06.880 12:15:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.880 12:15:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:06.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.880 12:15:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:06.880 12:15:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:06.880 12:15:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.880 [2024-11-25 12:15:02.720975] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:16:06.880 [2024-11-25 12:15:02.721138] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72765 ] 00:16:06.880 [2024-11-25 12:15:02.896127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.139 [2024-11-25 12:15:03.026299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.397 [2024-11-25 12:15:03.228674] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:07.397 [2024-11-25 12:15:03.228758] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:07.965 12:15:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:07.965 12:15:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:16:07.965 12:15:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:07.965 12:15:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:07.965 12:15:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:07.965 12:15:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:07.965 12:15:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:07.965 12:15:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:07.965 12:15:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:07.965 12:15:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:07.965 12:15:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:16:07.965 
12:15:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.965 12:15:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.965 malloc1 00:16:07.965 12:15:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.965 12:15:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:07.965 12:15:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.965 12:15:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.965 [2024-11-25 12:15:03.798196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:07.965 [2024-11-25 12:15:03.798463] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.965 [2024-11-25 12:15:03.798641] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:07.965 [2024-11-25 12:15:03.798781] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.965 [2024-11-25 12:15:03.801694] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.965 [2024-11-25 12:15:03.801860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:07.965 pt1 00:16:07.965 12:15:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.965 12:15:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:07.965 12:15:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:07.965 12:15:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:07.965 12:15:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:07.965 12:15:03 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:07.965 12:15:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:07.965 12:15:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:07.965 12:15:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:07.965 12:15:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:07.965 12:15:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.965 12:15:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.965 malloc2 00:16:07.965 12:15:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.965 12:15:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:07.965 12:15:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.965 12:15:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.965 [2024-11-25 12:15:03.854441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:07.966 [2024-11-25 12:15:03.854643] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.966 [2024-11-25 12:15:03.854721] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:07.966 [2024-11-25 12:15:03.854869] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.966 [2024-11-25 12:15:03.857755] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.966 [2024-11-25 12:15:03.857914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:07.966 
pt2 00:16:07.966 12:15:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.966 12:15:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:07.966 12:15:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:07.966 12:15:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:07.966 12:15:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:07.966 12:15:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:07.966 12:15:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:07.966 12:15:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:07.966 12:15:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:07.966 12:15:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:07.966 12:15:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.966 12:15:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.966 malloc3 00:16:07.966 12:15:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.966 12:15:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:07.966 12:15:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.966 12:15:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.966 [2024-11-25 12:15:03.926413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:07.966 [2024-11-25 12:15:03.926642] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.966 [2024-11-25 12:15:03.926712] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:07.966 [2024-11-25 12:15:03.926732] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.966 [2024-11-25 12:15:03.930190] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.966 [2024-11-25 12:15:03.930425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:07.966 pt3 12:15:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.966 12:15:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:07.966 12:15:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:07.966 12:15:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:16:07.966 12:15:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:16:07.966 12:15:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:07.966 12:15:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:07.966 12:15:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:07.966 12:15:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:07.966 12:15:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:16:07.966 12:15:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.966 12:15:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.966 malloc4 00:16:07.966 12:15:03 bdev_raid.raid_superblock_test --
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.966 12:15:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:07.966 12:15:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.966 12:15:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.966 [2024-11-25 12:15:03.986118] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:07.966 [2024-11-25 12:15:03.986368] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.966 [2024-11-25 12:15:03.986424] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:07.966 [2024-11-25 12:15:03.986454] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.966 [2024-11-25 12:15:03.989874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.966 [2024-11-25 12:15:03.989928] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:07.966 pt4 00:16:07.966 12:15:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.966 12:15:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:07.966 12:15:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:07.966 12:15:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:16:07.966 12:15:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.966 12:15:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.966 [2024-11-25 12:15:03.998235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:07.966 [2024-11-25 
12:15:04.001373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:07.966 [2024-11-25 12:15:04.001492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:07.966 [2024-11-25 12:15:04.001608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:07.966 [2024-11-25 12:15:04.001947] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:07.966 [2024-11-25 12:15:04.001970] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:07.966 [2024-11-25 12:15:04.002452] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:07.966 [2024-11-25 12:15:04.002731] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:07.966 [2024-11-25 12:15:04.002756] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:07.966 [2024-11-25 12:15:04.003068] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.966 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.966 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:16:07.966 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.966 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.966 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:07.966 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:07.966 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:07.966 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:16:07.966 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.966 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.966 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.966 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.966 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.966 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.966 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.966 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.966 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.966 "name": "raid_bdev1", 00:16:07.966 "uuid": "a3c2dcb8-4734-460d-bf05-163bcd71513f", 00:16:07.966 "strip_size_kb": 64, 00:16:07.966 "state": "online", 00:16:07.966 "raid_level": "concat", 00:16:07.966 "superblock": true, 00:16:07.967 "num_base_bdevs": 4, 00:16:07.967 "num_base_bdevs_discovered": 4, 00:16:07.967 "num_base_bdevs_operational": 4, 00:16:07.967 "base_bdevs_list": [ 00:16:07.967 { 00:16:07.967 "name": "pt1", 00:16:07.967 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:07.967 "is_configured": true, 00:16:07.967 "data_offset": 2048, 00:16:07.967 "data_size": 63488 00:16:07.967 }, 00:16:07.967 { 00:16:07.967 "name": "pt2", 00:16:07.967 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:07.967 "is_configured": true, 00:16:07.967 "data_offset": 2048, 00:16:07.967 "data_size": 63488 00:16:07.967 }, 00:16:07.967 { 00:16:07.967 "name": "pt3", 00:16:07.967 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:07.967 "is_configured": true, 00:16:07.967 "data_offset": 2048, 00:16:07.967 
"data_size": 63488 00:16:07.967 }, 00:16:07.967 { 00:16:07.967 "name": "pt4", 00:16:07.967 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:07.967 "is_configured": true, 00:16:07.967 "data_offset": 2048, 00:16:07.967 "data_size": 63488 00:16:07.967 } 00:16:07.967 ] 00:16:07.967 }' 00:16:07.967 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.967 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.533 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:08.533 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:08.533 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:08.533 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:08.533 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:08.533 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:08.533 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:08.533 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.533 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:08.533 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.533 [2024-11-25 12:15:04.447713] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:08.533 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.533 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:08.533 "name": "raid_bdev1", 00:16:08.533 "aliases": [ 00:16:08.533 "a3c2dcb8-4734-460d-bf05-163bcd71513f" 
00:16:08.533 ], 00:16:08.533 "product_name": "Raid Volume", 00:16:08.533 "block_size": 512, 00:16:08.533 "num_blocks": 253952, 00:16:08.533 "uuid": "a3c2dcb8-4734-460d-bf05-163bcd71513f", 00:16:08.533 "assigned_rate_limits": { 00:16:08.533 "rw_ios_per_sec": 0, 00:16:08.533 "rw_mbytes_per_sec": 0, 00:16:08.533 "r_mbytes_per_sec": 0, 00:16:08.533 "w_mbytes_per_sec": 0 00:16:08.533 }, 00:16:08.533 "claimed": false, 00:16:08.533 "zoned": false, 00:16:08.533 "supported_io_types": { 00:16:08.533 "read": true, 00:16:08.533 "write": true, 00:16:08.533 "unmap": true, 00:16:08.533 "flush": true, 00:16:08.533 "reset": true, 00:16:08.533 "nvme_admin": false, 00:16:08.533 "nvme_io": false, 00:16:08.533 "nvme_io_md": false, 00:16:08.533 "write_zeroes": true, 00:16:08.533 "zcopy": false, 00:16:08.533 "get_zone_info": false, 00:16:08.533 "zone_management": false, 00:16:08.533 "zone_append": false, 00:16:08.533 "compare": false, 00:16:08.533 "compare_and_write": false, 00:16:08.533 "abort": false, 00:16:08.533 "seek_hole": false, 00:16:08.533 "seek_data": false, 00:16:08.533 "copy": false, 00:16:08.533 "nvme_iov_md": false 00:16:08.533 }, 00:16:08.533 "memory_domains": [ 00:16:08.533 { 00:16:08.533 "dma_device_id": "system", 00:16:08.533 "dma_device_type": 1 00:16:08.533 }, 00:16:08.533 { 00:16:08.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.533 "dma_device_type": 2 00:16:08.533 }, 00:16:08.533 { 00:16:08.533 "dma_device_id": "system", 00:16:08.533 "dma_device_type": 1 00:16:08.533 }, 00:16:08.533 { 00:16:08.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.533 "dma_device_type": 2 00:16:08.533 }, 00:16:08.533 { 00:16:08.533 "dma_device_id": "system", 00:16:08.533 "dma_device_type": 1 00:16:08.533 }, 00:16:08.533 { 00:16:08.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.533 "dma_device_type": 2 00:16:08.533 }, 00:16:08.533 { 00:16:08.533 "dma_device_id": "system", 00:16:08.533 "dma_device_type": 1 00:16:08.533 }, 00:16:08.533 { 00:16:08.533 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:08.533 "dma_device_type": 2 00:16:08.533 } 00:16:08.533 ], 00:16:08.533 "driver_specific": { 00:16:08.533 "raid": { 00:16:08.533 "uuid": "a3c2dcb8-4734-460d-bf05-163bcd71513f", 00:16:08.533 "strip_size_kb": 64, 00:16:08.533 "state": "online", 00:16:08.533 "raid_level": "concat", 00:16:08.533 "superblock": true, 00:16:08.533 "num_base_bdevs": 4, 00:16:08.533 "num_base_bdevs_discovered": 4, 00:16:08.533 "num_base_bdevs_operational": 4, 00:16:08.533 "base_bdevs_list": [ 00:16:08.533 { 00:16:08.533 "name": "pt1", 00:16:08.533 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:08.533 "is_configured": true, 00:16:08.533 "data_offset": 2048, 00:16:08.533 "data_size": 63488 00:16:08.533 }, 00:16:08.533 { 00:16:08.533 "name": "pt2", 00:16:08.533 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:08.533 "is_configured": true, 00:16:08.533 "data_offset": 2048, 00:16:08.533 "data_size": 63488 00:16:08.533 }, 00:16:08.533 { 00:16:08.533 "name": "pt3", 00:16:08.533 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:08.533 "is_configured": true, 00:16:08.533 "data_offset": 2048, 00:16:08.533 "data_size": 63488 00:16:08.533 }, 00:16:08.533 { 00:16:08.533 "name": "pt4", 00:16:08.533 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:08.533 "is_configured": true, 00:16:08.533 "data_offset": 2048, 00:16:08.533 "data_size": 63488 00:16:08.533 } 00:16:08.533 ] 00:16:08.533 } 00:16:08.533 } 00:16:08.533 }' 00:16:08.534 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:08.534 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:08.534 pt2 00:16:08.534 pt3 00:16:08.534 pt4' 00:16:08.534 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:08.534 12:15:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:08.534 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:08.534 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:08.534 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:08.534 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.534 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.534 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.534 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:08.534 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:08.534 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:08.792 12:15:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.792 [2024-11-25 12:15:04.767713] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a3c2dcb8-4734-460d-bf05-163bcd71513f 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a3c2dcb8-4734-460d-bf05-163bcd71513f ']' 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.792 [2024-11-25 12:15:04.815312] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:08.792 [2024-11-25 12:15:04.815420] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:08.792 [2024-11-25 12:15:04.815593] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:08.792 [2024-11-25 12:15:04.815715] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:08.792 [2024-11-25 12:15:04.815745] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.792 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.052 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.052 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:09.052 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:09.053 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.053 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.053 12:15:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.053 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:09.053 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:16:09.053 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.053 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.053 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.053 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:09.053 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.053 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.053 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:09.053 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.053 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:09.053 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:09.053 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:16:09.053 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:09.053 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:09.053 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:09.053 12:15:04 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:09.053 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:09.053 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:09.053 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.053 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.053 [2024-11-25 12:15:04.959398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:09.053 [2024-11-25 12:15:04.962247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:09.053 [2024-11-25 12:15:04.962374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:09.053 [2024-11-25 12:15:04.962455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:09.053 [2024-11-25 12:15:04.962563] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:09.053 [2024-11-25 12:15:04.962670] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:09.053 [2024-11-25 12:15:04.962713] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:09.053 [2024-11-25 12:15:04.962756] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:09.053 [2024-11-25 12:15:04.962786] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:09.053 [2024-11-25 12:15:04.962807] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:16:09.053 request: 00:16:09.053 { 00:16:09.053 "name": "raid_bdev1", 00:16:09.053 "raid_level": "concat", 00:16:09.053 "base_bdevs": [ 00:16:09.053 "malloc1", 00:16:09.053 "malloc2", 00:16:09.053 "malloc3", 00:16:09.053 "malloc4" 00:16:09.053 ], 00:16:09.053 "strip_size_kb": 64, 00:16:09.053 "superblock": false, 00:16:09.053 "method": "bdev_raid_create", 00:16:09.053 "req_id": 1 00:16:09.053 } 00:16:09.053 Got JSON-RPC error response 00:16:09.053 response: 00:16:09.053 { 00:16:09.053 "code": -17, 00:16:09.053 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:09.053 } 00:16:09.053 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:09.053 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:16:09.053 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:09.053 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:09.053 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:09.053 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.053 12:15:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:09.053 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.053 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.053 12:15:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.053 12:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:09.053 12:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:09.053 12:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:16:09.053 12:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.053 12:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.053 [2024-11-25 12:15:05.019540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:09.053 [2024-11-25 12:15:05.019974] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.053 [2024-11-25 12:15:05.020140] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:09.053 [2024-11-25 12:15:05.020276] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.053 [2024-11-25 12:15:05.023711] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.053 [2024-11-25 12:15:05.023896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:09.053 [2024-11-25 12:15:05.024206] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:09.053 [2024-11-25 12:15:05.024469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:09.053 pt1 00:16:09.053 12:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.053 12:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:16:09.053 12:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:09.053 12:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:09.053 12:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:09.053 12:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.053 12:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:16:09.053 12:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.053 12:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.053 12:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.053 12:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.053 12:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.053 12:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.053 12:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.053 12:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.053 12:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.053 12:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.053 "name": "raid_bdev1", 00:16:09.053 "uuid": "a3c2dcb8-4734-460d-bf05-163bcd71513f", 00:16:09.053 "strip_size_kb": 64, 00:16:09.053 "state": "configuring", 00:16:09.053 "raid_level": "concat", 00:16:09.053 "superblock": true, 00:16:09.053 "num_base_bdevs": 4, 00:16:09.053 "num_base_bdevs_discovered": 1, 00:16:09.053 "num_base_bdevs_operational": 4, 00:16:09.053 "base_bdevs_list": [ 00:16:09.053 { 00:16:09.053 "name": "pt1", 00:16:09.053 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:09.053 "is_configured": true, 00:16:09.053 "data_offset": 2048, 00:16:09.053 "data_size": 63488 00:16:09.053 }, 00:16:09.053 { 00:16:09.053 "name": null, 00:16:09.053 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:09.053 "is_configured": false, 00:16:09.053 "data_offset": 2048, 00:16:09.053 "data_size": 63488 00:16:09.053 }, 00:16:09.053 { 00:16:09.053 "name": null, 00:16:09.053 
"uuid": "00000000-0000-0000-0000-000000000003", 00:16:09.053 "is_configured": false, 00:16:09.053 "data_offset": 2048, 00:16:09.053 "data_size": 63488 00:16:09.053 }, 00:16:09.053 { 00:16:09.053 "name": null, 00:16:09.053 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:09.053 "is_configured": false, 00:16:09.053 "data_offset": 2048, 00:16:09.053 "data_size": 63488 00:16:09.053 } 00:16:09.053 ] 00:16:09.053 }' 00:16:09.053 12:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.053 12:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.621 12:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:09.621 12:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:09.621 12:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.621 12:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.621 [2024-11-25 12:15:05.544537] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:09.621 [2024-11-25 12:15:05.544698] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.621 [2024-11-25 12:15:05.544739] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:09.621 [2024-11-25 12:15:05.544763] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.621 [2024-11-25 12:15:05.545473] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.621 [2024-11-25 12:15:05.545794] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:09.621 [2024-11-25 12:15:05.545950] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:09.621 [2024-11-25 12:15:05.546000] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:09.621 pt2 00:16:09.621 12:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.621 12:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:09.621 12:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.621 12:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.621 [2024-11-25 12:15:05.552463] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:09.621 12:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.621 12:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:16:09.621 12:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:09.621 12:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:09.621 12:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:09.621 12:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.621 12:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:09.621 12:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.621 12:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.621 12:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.621 12:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.621 12:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.621 12:15:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.621 12:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.621 12:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.621 12:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.621 12:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.621 "name": "raid_bdev1", 00:16:09.621 "uuid": "a3c2dcb8-4734-460d-bf05-163bcd71513f", 00:16:09.621 "strip_size_kb": 64, 00:16:09.621 "state": "configuring", 00:16:09.621 "raid_level": "concat", 00:16:09.621 "superblock": true, 00:16:09.621 "num_base_bdevs": 4, 00:16:09.621 "num_base_bdevs_discovered": 1, 00:16:09.621 "num_base_bdevs_operational": 4, 00:16:09.621 "base_bdevs_list": [ 00:16:09.621 { 00:16:09.621 "name": "pt1", 00:16:09.621 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:09.621 "is_configured": true, 00:16:09.621 "data_offset": 2048, 00:16:09.621 "data_size": 63488 00:16:09.621 }, 00:16:09.621 { 00:16:09.621 "name": null, 00:16:09.621 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:09.621 "is_configured": false, 00:16:09.621 "data_offset": 0, 00:16:09.621 "data_size": 63488 00:16:09.621 }, 00:16:09.621 { 00:16:09.621 "name": null, 00:16:09.621 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:09.621 "is_configured": false, 00:16:09.621 "data_offset": 2048, 00:16:09.621 "data_size": 63488 00:16:09.621 }, 00:16:09.621 { 00:16:09.621 "name": null, 00:16:09.621 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:09.621 "is_configured": false, 00:16:09.621 "data_offset": 2048, 00:16:09.621 "data_size": 63488 00:16:09.621 } 00:16:09.621 ] 00:16:09.621 }' 00:16:09.621 12:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.621 12:15:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:10.189 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:10.189 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:10.189 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:10.189 12:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.189 12:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.189 [2024-11-25 12:15:06.048688] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:10.189 [2024-11-25 12:15:06.048838] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.189 [2024-11-25 12:15:06.048881] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:10.189 [2024-11-25 12:15:06.048901] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.189 [2024-11-25 12:15:06.049654] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.189 [2024-11-25 12:15:06.049693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:10.189 [2024-11-25 12:15:06.049829] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:10.189 [2024-11-25 12:15:06.049881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:10.189 pt2 00:16:10.189 12:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.189 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:10.189 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:10.189 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:10.189 12:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.189 12:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.189 [2024-11-25 12:15:06.056591] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:10.189 [2024-11-25 12:15:06.056665] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.189 [2024-11-25 12:15:06.056701] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:10.189 [2024-11-25 12:15:06.056718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.189 [2024-11-25 12:15:06.057246] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.189 [2024-11-25 12:15:06.057295] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:10.189 [2024-11-25 12:15:06.057417] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:10.189 [2024-11-25 12:15:06.057452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:10.189 pt3 00:16:10.189 12:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.189 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:10.189 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:10.189 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:10.189 12:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.189 12:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.189 [2024-11-25 12:15:06.064535] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:10.189 [2024-11-25 12:15:06.064607] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.189 [2024-11-25 12:15:06.064642] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:10.189 [2024-11-25 12:15:06.064661] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.189 [2024-11-25 12:15:06.065181] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.189 [2024-11-25 12:15:06.065228] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:10.189 [2024-11-25 12:15:06.065324] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:10.189 [2024-11-25 12:15:06.065383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:10.189 [2024-11-25 12:15:06.065577] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:10.189 [2024-11-25 12:15:06.065597] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:10.189 [2024-11-25 12:15:06.065937] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:10.189 [2024-11-25 12:15:06.066181] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:10.189 [2024-11-25 12:15:06.066209] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:10.189 [2024-11-25 12:15:06.066416] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.189 pt4 00:16:10.189 12:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.189 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:10.189 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:16:10.189 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:16:10.189 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.189 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.189 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:10.189 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.189 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:10.189 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.189 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.189 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.189 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.189 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.189 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.189 12:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.189 12:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.190 12:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.190 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.190 "name": "raid_bdev1", 00:16:10.190 "uuid": "a3c2dcb8-4734-460d-bf05-163bcd71513f", 00:16:10.190 "strip_size_kb": 64, 00:16:10.190 "state": "online", 00:16:10.190 "raid_level": "concat", 00:16:10.190 
"superblock": true, 00:16:10.190 "num_base_bdevs": 4, 00:16:10.190 "num_base_bdevs_discovered": 4, 00:16:10.190 "num_base_bdevs_operational": 4, 00:16:10.190 "base_bdevs_list": [ 00:16:10.190 { 00:16:10.190 "name": "pt1", 00:16:10.190 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:10.190 "is_configured": true, 00:16:10.190 "data_offset": 2048, 00:16:10.190 "data_size": 63488 00:16:10.190 }, 00:16:10.190 { 00:16:10.190 "name": "pt2", 00:16:10.190 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:10.190 "is_configured": true, 00:16:10.190 "data_offset": 2048, 00:16:10.190 "data_size": 63488 00:16:10.190 }, 00:16:10.190 { 00:16:10.190 "name": "pt3", 00:16:10.190 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:10.190 "is_configured": true, 00:16:10.190 "data_offset": 2048, 00:16:10.190 "data_size": 63488 00:16:10.190 }, 00:16:10.190 { 00:16:10.190 "name": "pt4", 00:16:10.190 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:10.190 "is_configured": true, 00:16:10.190 "data_offset": 2048, 00:16:10.190 "data_size": 63488 00:16:10.190 } 00:16:10.190 ] 00:16:10.190 }' 00:16:10.190 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.190 12:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.758 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:10.758 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:10.758 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:10.758 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:10.758 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:10.758 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:10.758 12:15:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:10.758 12:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.758 12:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.758 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:10.758 [2024-11-25 12:15:06.589260] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:10.758 12:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.758 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:10.758 "name": "raid_bdev1", 00:16:10.758 "aliases": [ 00:16:10.758 "a3c2dcb8-4734-460d-bf05-163bcd71513f" 00:16:10.758 ], 00:16:10.758 "product_name": "Raid Volume", 00:16:10.758 "block_size": 512, 00:16:10.758 "num_blocks": 253952, 00:16:10.758 "uuid": "a3c2dcb8-4734-460d-bf05-163bcd71513f", 00:16:10.758 "assigned_rate_limits": { 00:16:10.758 "rw_ios_per_sec": 0, 00:16:10.758 "rw_mbytes_per_sec": 0, 00:16:10.758 "r_mbytes_per_sec": 0, 00:16:10.758 "w_mbytes_per_sec": 0 00:16:10.758 }, 00:16:10.758 "claimed": false, 00:16:10.758 "zoned": false, 00:16:10.758 "supported_io_types": { 00:16:10.758 "read": true, 00:16:10.758 "write": true, 00:16:10.758 "unmap": true, 00:16:10.758 "flush": true, 00:16:10.758 "reset": true, 00:16:10.758 "nvme_admin": false, 00:16:10.758 "nvme_io": false, 00:16:10.758 "nvme_io_md": false, 00:16:10.758 "write_zeroes": true, 00:16:10.758 "zcopy": false, 00:16:10.758 "get_zone_info": false, 00:16:10.759 "zone_management": false, 00:16:10.759 "zone_append": false, 00:16:10.759 "compare": false, 00:16:10.759 "compare_and_write": false, 00:16:10.759 "abort": false, 00:16:10.759 "seek_hole": false, 00:16:10.759 "seek_data": false, 00:16:10.759 "copy": false, 00:16:10.759 "nvme_iov_md": false 00:16:10.759 }, 00:16:10.759 
"memory_domains": [ 00:16:10.759 { 00:16:10.759 "dma_device_id": "system", 00:16:10.759 "dma_device_type": 1 00:16:10.759 }, 00:16:10.759 { 00:16:10.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.759 "dma_device_type": 2 00:16:10.759 }, 00:16:10.759 { 00:16:10.759 "dma_device_id": "system", 00:16:10.759 "dma_device_type": 1 00:16:10.759 }, 00:16:10.759 { 00:16:10.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.759 "dma_device_type": 2 00:16:10.759 }, 00:16:10.759 { 00:16:10.759 "dma_device_id": "system", 00:16:10.759 "dma_device_type": 1 00:16:10.759 }, 00:16:10.759 { 00:16:10.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.759 "dma_device_type": 2 00:16:10.759 }, 00:16:10.759 { 00:16:10.759 "dma_device_id": "system", 00:16:10.759 "dma_device_type": 1 00:16:10.759 }, 00:16:10.759 { 00:16:10.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.759 "dma_device_type": 2 00:16:10.759 } 00:16:10.759 ], 00:16:10.759 "driver_specific": { 00:16:10.759 "raid": { 00:16:10.759 "uuid": "a3c2dcb8-4734-460d-bf05-163bcd71513f", 00:16:10.759 "strip_size_kb": 64, 00:16:10.759 "state": "online", 00:16:10.759 "raid_level": "concat", 00:16:10.759 "superblock": true, 00:16:10.759 "num_base_bdevs": 4, 00:16:10.759 "num_base_bdevs_discovered": 4, 00:16:10.759 "num_base_bdevs_operational": 4, 00:16:10.759 "base_bdevs_list": [ 00:16:10.759 { 00:16:10.759 "name": "pt1", 00:16:10.759 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:10.759 "is_configured": true, 00:16:10.759 "data_offset": 2048, 00:16:10.759 "data_size": 63488 00:16:10.759 }, 00:16:10.759 { 00:16:10.759 "name": "pt2", 00:16:10.759 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:10.759 "is_configured": true, 00:16:10.759 "data_offset": 2048, 00:16:10.759 "data_size": 63488 00:16:10.759 }, 00:16:10.759 { 00:16:10.759 "name": "pt3", 00:16:10.759 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:10.759 "is_configured": true, 00:16:10.759 "data_offset": 2048, 00:16:10.759 "data_size": 63488 
00:16:10.759 }, 00:16:10.759 { 00:16:10.759 "name": "pt4", 00:16:10.759 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:10.759 "is_configured": true, 00:16:10.759 "data_offset": 2048, 00:16:10.759 "data_size": 63488 00:16:10.759 } 00:16:10.759 ] 00:16:10.759 } 00:16:10.759 } 00:16:10.759 }' 00:16:10.759 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:10.759 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:10.759 pt2 00:16:10.759 pt3 00:16:10.759 pt4' 00:16:10.759 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:10.759 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:10.759 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:10.759 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:10.759 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:10.759 12:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.759 12:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.759 12:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.759 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:10.759 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:10.759 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:10.759 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:16:10.759 12:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.759 12:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.759 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:10.759 12:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.759 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:10.759 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:10.759 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:10.759 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:10.759 12:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.759 12:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.759 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:10.759 12:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.018 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:11.018 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:11.018 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:11.018 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:11.018 12:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.018 12:15:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:11.018 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:11.018 12:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.018 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:11.018 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:11.018 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:11.018 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:11.018 12:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.018 12:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.018 [2024-11-25 12:15:06.929316] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:11.018 12:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.018 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a3c2dcb8-4734-460d-bf05-163bcd71513f '!=' a3c2dcb8-4734-460d-bf05-163bcd71513f ']' 00:16:11.018 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:16:11.018 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:11.018 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:11.018 12:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72765 00:16:11.018 12:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72765 ']' 00:16:11.018 12:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72765 00:16:11.018 12:15:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:16:11.018 12:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:11.018 12:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72765 00:16:11.018 killing process with pid 72765 00:16:11.018 12:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:11.018 12:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:11.018 12:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72765' 00:16:11.018 12:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72765 00:16:11.018 12:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72765 00:16:11.018 [2024-11-25 12:15:07.014729] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:11.018 [2024-11-25 12:15:07.014902] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:11.018 [2024-11-25 12:15:07.015033] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:11.018 [2024-11-25 12:15:07.015053] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:11.611 [2024-11-25 12:15:07.408902] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:12.569 ************************************ 00:16:12.569 END TEST raid_superblock_test 00:16:12.569 12:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:12.569 00:16:12.569 real 0m5.871s 00:16:12.569 user 0m8.687s 00:16:12.569 sys 0m0.855s 00:16:12.569 12:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:12.569 12:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:16:12.569 ************************************ 00:16:12.569 12:15:08 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:16:12.569 12:15:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:12.569 12:15:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:12.569 12:15:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:12.569 ************************************ 00:16:12.569 START TEST raid_read_error_test 00:16:12.569 ************************************ 00:16:12.569 12:15:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:16:12.569 12:15:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:16:12.569 12:15:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:16:12.569 12:15:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:16:12.569 12:15:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:16:12.569 12:15:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:12.569 12:15:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:16:12.569 12:15:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:12.569 12:15:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:12.569 12:15:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:16:12.569 12:15:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:12.570 12:15:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:12.570 12:15:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:16:12.570 12:15:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:16:12.570 12:15:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:12.570 12:15:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:16:12.570 12:15:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:12.570 12:15:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:12.570 12:15:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:12.570 12:15:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:16:12.570 12:15:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:16:12.570 12:15:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:16:12.570 12:15:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:16:12.570 12:15:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:16:12.570 12:15:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:16:12.570 12:15:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:16:12.570 12:15:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:16:12.570 12:15:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:16:12.570 12:15:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:16:12.570 12:15:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.a4TtTgb860 00:16:12.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:12.570 12:15:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73026 00:16:12.570 12:15:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73026 00:16:12.570 12:15:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 73026 ']' 00:16:12.570 12:15:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.570 12:15:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:12.570 12:15:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.570 12:15:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:12.570 12:15:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.570 12:15:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:12.828 [2024-11-25 12:15:08.668173] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:16:12.829 [2024-11-25 12:15:08.668385] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73026 ] 00:16:12.829 [2024-11-25 12:15:08.857129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.087 [2024-11-25 12:15:09.011328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.345 [2024-11-25 12:15:09.232672] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:13.345 [2024-11-25 12:15:09.232963] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:13.604 12:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:13.604 12:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:16:13.604 12:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:13.604 12:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:13.604 12:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.604 12:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.862 BaseBdev1_malloc 00:16:13.862 12:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.862 12:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:16:13.862 12:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.862 12:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.862 true 00:16:13.862 12:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:13.862 12:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:13.862 12:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.862 12:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.862 [2024-11-25 12:15:09.737226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:13.862 [2024-11-25 12:15:09.737296] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:13.862 [2024-11-25 12:15:09.737332] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:13.862 [2024-11-25 12:15:09.737367] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:13.862 [2024-11-25 12:15:09.740155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:13.862 [2024-11-25 12:15:09.740208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:13.862 BaseBdev1 00:16:13.862 12:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.862 12:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:13.862 12:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:13.862 12:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.862 12:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.862 BaseBdev2_malloc 00:16:13.862 12:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.862 12:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:16:13.862 12:15:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.862 12:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.862 true 00:16:13.862 12:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.862 12:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:13.862 12:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.862 12:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.862 [2024-11-25 12:15:09.793048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:13.862 [2024-11-25 12:15:09.793121] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:13.862 [2024-11-25 12:15:09.793154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:13.862 [2024-11-25 12:15:09.793173] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:13.862 [2024-11-25 12:15:09.796004] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:13.862 [2024-11-25 12:15:09.796054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:13.862 BaseBdev2 00:16:13.862 12:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.862 12:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:13.862 12:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:13.862 12:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.862 12:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.862 BaseBdev3_malloc 00:16:13.862 12:15:09 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.862 12:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:16:13.862 12:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.862 12:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.862 true 00:16:13.862 12:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.862 12:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:13.862 12:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.863 12:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.863 [2024-11-25 12:15:09.862411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:13.863 [2024-11-25 12:15:09.862480] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:13.863 [2024-11-25 12:15:09.862514] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:13.863 [2024-11-25 12:15:09.862532] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:13.863 [2024-11-25 12:15:09.865275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:13.863 [2024-11-25 12:15:09.865326] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:13.863 BaseBdev3 00:16:13.863 12:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.863 12:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:13.863 12:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:16:13.863 12:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.863 12:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.863 BaseBdev4_malloc 00:16:13.863 12:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.863 12:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:16:13.863 12:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.863 12:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.863 true 00:16:13.863 12:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.863 12:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:16:13.863 12:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.863 12:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.863 [2024-11-25 12:15:09.918329] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:16:13.863 [2024-11-25 12:15:09.918415] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:13.863 [2024-11-25 12:15:09.918446] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:13.863 [2024-11-25 12:15:09.918465] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:13.863 [2024-11-25 12:15:09.921276] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:13.863 [2024-11-25 12:15:09.921502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:13.863 BaseBdev4 00:16:13.863 12:15:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.863 12:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:16:13.863 12:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.863 12:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.863 [2024-11-25 12:15:09.926423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:13.863 [2024-11-25 12:15:09.928855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:13.863 [2024-11-25 12:15:09.928970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:13.863 [2024-11-25 12:15:09.929089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:13.863 [2024-11-25 12:15:09.929457] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:16:13.863 [2024-11-25 12:15:09.929487] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:13.863 [2024-11-25 12:15:09.929810] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:16:13.863 [2024-11-25 12:15:09.930029] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:16:13.863 [2024-11-25 12:15:09.930068] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:16:13.863 [2024-11-25 12:15:09.930359] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:13.863 12:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.863 12:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:16:13.863 12:15:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:13.863 12:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:13.863 12:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:13.863 12:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:13.863 12:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:13.863 12:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.863 12:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.863 12:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.863 12:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.863 12:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.863 12:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.863 12:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.863 12:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.863 12:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.122 12:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.122 "name": "raid_bdev1", 00:16:14.122 "uuid": "d1e54d81-2bc5-4dc6-8a8e-bd300ea8aa71", 00:16:14.122 "strip_size_kb": 64, 00:16:14.122 "state": "online", 00:16:14.122 "raid_level": "concat", 00:16:14.122 "superblock": true, 00:16:14.122 "num_base_bdevs": 4, 00:16:14.122 "num_base_bdevs_discovered": 4, 00:16:14.122 "num_base_bdevs_operational": 4, 00:16:14.122 "base_bdevs_list": [ 
00:16:14.122 { 00:16:14.122 "name": "BaseBdev1", 00:16:14.122 "uuid": "40ce10a9-f275-56c9-9c63-68beeb72a8f9", 00:16:14.122 "is_configured": true, 00:16:14.122 "data_offset": 2048, 00:16:14.122 "data_size": 63488 00:16:14.122 }, 00:16:14.122 { 00:16:14.122 "name": "BaseBdev2", 00:16:14.122 "uuid": "0771efc8-59d6-56ed-a453-80af47f407b6", 00:16:14.122 "is_configured": true, 00:16:14.122 "data_offset": 2048, 00:16:14.122 "data_size": 63488 00:16:14.122 }, 00:16:14.122 { 00:16:14.122 "name": "BaseBdev3", 00:16:14.122 "uuid": "f415be5f-fc9d-5338-8924-264ab8b30644", 00:16:14.122 "is_configured": true, 00:16:14.122 "data_offset": 2048, 00:16:14.122 "data_size": 63488 00:16:14.122 }, 00:16:14.122 { 00:16:14.122 "name": "BaseBdev4", 00:16:14.122 "uuid": "393dfffb-d820-5cff-a011-718ac9e8995d", 00:16:14.122 "is_configured": true, 00:16:14.122 "data_offset": 2048, 00:16:14.122 "data_size": 63488 00:16:14.122 } 00:16:14.122 ] 00:16:14.122 }' 00:16:14.122 12:15:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.122 12:15:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.381 12:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:14.381 12:15:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:14.639 [2024-11-25 12:15:10.571938] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:16:15.575 12:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:15.575 12:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.575 12:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.575 12:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.575 12:15:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:15.575 12:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:16:15.575 12:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:16:15.575 12:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:16:15.575 12:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:15.575 12:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:15.575 12:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:15.575 12:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.575 12:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:15.575 12:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.575 12:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.575 12:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.575 12:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.575 12:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.575 12:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.575 12:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.575 12:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.575 12:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.575 12:15:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.575 "name": "raid_bdev1", 00:16:15.575 "uuid": "d1e54d81-2bc5-4dc6-8a8e-bd300ea8aa71", 00:16:15.575 "strip_size_kb": 64, 00:16:15.575 "state": "online", 00:16:15.575 "raid_level": "concat", 00:16:15.575 "superblock": true, 00:16:15.575 "num_base_bdevs": 4, 00:16:15.575 "num_base_bdevs_discovered": 4, 00:16:15.575 "num_base_bdevs_operational": 4, 00:16:15.575 "base_bdevs_list": [ 00:16:15.575 { 00:16:15.575 "name": "BaseBdev1", 00:16:15.575 "uuid": "40ce10a9-f275-56c9-9c63-68beeb72a8f9", 00:16:15.575 "is_configured": true, 00:16:15.575 "data_offset": 2048, 00:16:15.575 "data_size": 63488 00:16:15.575 }, 00:16:15.575 { 00:16:15.575 "name": "BaseBdev2", 00:16:15.575 "uuid": "0771efc8-59d6-56ed-a453-80af47f407b6", 00:16:15.575 "is_configured": true, 00:16:15.575 "data_offset": 2048, 00:16:15.575 "data_size": 63488 00:16:15.575 }, 00:16:15.575 { 00:16:15.575 "name": "BaseBdev3", 00:16:15.575 "uuid": "f415be5f-fc9d-5338-8924-264ab8b30644", 00:16:15.575 "is_configured": true, 00:16:15.575 "data_offset": 2048, 00:16:15.575 "data_size": 63488 00:16:15.576 }, 00:16:15.576 { 00:16:15.576 "name": "BaseBdev4", 00:16:15.576 "uuid": "393dfffb-d820-5cff-a011-718ac9e8995d", 00:16:15.576 "is_configured": true, 00:16:15.576 "data_offset": 2048, 00:16:15.576 "data_size": 63488 00:16:15.576 } 00:16:15.576 ] 00:16:15.576 }' 00:16:15.576 12:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.576 12:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.143 12:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:16.143 12:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.143 12:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.143 [2024-11-25 12:15:11.963155] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:16.143 [2024-11-25 12:15:11.963363] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:16.143 [2024-11-25 12:15:11.966811] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:16.143 [2024-11-25 12:15:11.967025] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.143 [2024-11-25 12:15:11.967101] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:16.143 [2024-11-25 12:15:11.967127] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:16:16.143 { 00:16:16.143 "results": [ 00:16:16.143 { 00:16:16.143 "job": "raid_bdev1", 00:16:16.143 "core_mask": "0x1", 00:16:16.143 "workload": "randrw", 00:16:16.143 "percentage": 50, 00:16:16.143 "status": "finished", 00:16:16.143 "queue_depth": 1, 00:16:16.143 "io_size": 131072, 00:16:16.143 "runtime": 1.388876, 00:16:16.144 "iops": 10676.979082365884, 00:16:16.144 "mibps": 1334.6223852957355, 00:16:16.144 "io_failed": 1, 00:16:16.144 "io_timeout": 0, 00:16:16.144 "avg_latency_us": 130.88985913075462, 00:16:16.144 "min_latency_us": 43.28727272727273, 00:16:16.144 "max_latency_us": 1839.4763636363637 00:16:16.144 } 00:16:16.144 ], 00:16:16.144 "core_count": 1 00:16:16.144 } 00:16:16.144 12:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.144 12:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73026 00:16:16.144 12:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 73026 ']' 00:16:16.144 12:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 73026 00:16:16.144 12:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:16:16.144 12:15:11 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:16.144 12:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73026 00:16:16.144 killing process with pid 73026 00:16:16.144 12:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:16.144 12:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:16.144 12:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73026' 00:16:16.144 12:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 73026 00:16:16.144 [2024-11-25 12:15:11.999821] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:16.144 12:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 73026 00:16:16.402 [2024-11-25 12:15:12.283289] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:17.339 12:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.a4TtTgb860 00:16:17.339 12:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:16:17.339 12:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:16:17.339 12:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:16:17.339 12:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:16:17.339 12:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:17.339 12:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:17.339 12:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:16:17.339 00:16:17.339 real 0m4.817s 00:16:17.339 user 0m5.947s 00:16:17.339 sys 0m0.605s 00:16:17.339 ************************************ 00:16:17.339 END TEST raid_read_error_test 
00:16:17.339 ************************************ 00:16:17.339 12:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:17.339 12:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.339 12:15:13 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:16:17.339 12:15:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:17.339 12:15:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:17.339 12:15:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:17.339 ************************************ 00:16:17.339 START TEST raid_write_error_test 00:16:17.339 ************************************ 00:16:17.339 12:15:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:16:17.339 12:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:16:17.339 12:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:16:17.339 12:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:16:17.339 12:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:16:17.339 12:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:17.339 12:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:16:17.339 12:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:17.339 12:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:17.339 12:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:16:17.339 12:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:17.339 12:15:13 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:17.339 12:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:16:17.339 12:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:17.340 12:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:17.340 12:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:16:17.340 12:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:17.340 12:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:17.340 12:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:17.340 12:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:16:17.340 12:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:16:17.340 12:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:16:17.340 12:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:16:17.340 12:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:16:17.340 12:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:16:17.340 12:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:16:17.340 12:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:16:17.340 12:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:16:17.340 12:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:16:17.599 12:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.fNLkdxlbas 00:16:17.599 12:15:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73175 00:16:17.599 12:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73175 00:16:17.599 12:15:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:17.599 12:15:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73175 ']' 00:16:17.599 12:15:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.599 12:15:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:17.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.599 12:15:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.599 12:15:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:17.599 12:15:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.599 [2024-11-25 12:15:13.521547] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:16:17.599 [2024-11-25 12:15:13.521887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73175 ] 00:16:17.857 [2024-11-25 12:15:13.700432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.857 [2024-11-25 12:15:13.832611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.116 [2024-11-25 12:15:14.042707] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:18.116 [2024-11-25 12:15:14.043010] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.683 BaseBdev1_malloc 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.683 true 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.683 [2024-11-25 12:15:14.544973] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:18.683 [2024-11-25 12:15:14.545061] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.683 [2024-11-25 12:15:14.545110] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:18.683 [2024-11-25 12:15:14.545139] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.683 [2024-11-25 12:15:14.548072] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.683 [2024-11-25 12:15:14.548127] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:18.683 BaseBdev1 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.683 BaseBdev2_malloc 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:16:18.683 12:15:14 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.683 true 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.683 [2024-11-25 12:15:14.600871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:18.683 [2024-11-25 12:15:14.600961] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.683 [2024-11-25 12:15:14.600991] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:18.683 [2024-11-25 12:15:14.601009] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.683 [2024-11-25 12:15:14.603838] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.683 [2024-11-25 12:15:14.603889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:18.683 BaseBdev2 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:16:18.683 BaseBdev3_malloc 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.683 true 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.683 [2024-11-25 12:15:14.669534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:18.683 [2024-11-25 12:15:14.669603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.683 [2024-11-25 12:15:14.669633] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:18.683 [2024-11-25 12:15:14.669651] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.683 [2024-11-25 12:15:14.672462] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.683 [2024-11-25 12:15:14.672513] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:18.683 BaseBdev3 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.683 BaseBdev4_malloc 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.683 true 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.683 12:15:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:16:18.684 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.684 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.684 [2024-11-25 12:15:14.725790] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:16:18.684 [2024-11-25 12:15:14.725870] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.684 [2024-11-25 12:15:14.725896] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:18.684 [2024-11-25 12:15:14.725914] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.684 [2024-11-25 12:15:14.728793] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.684 [2024-11-25 12:15:14.728848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:18.684 BaseBdev4 
00:16:18.684 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.684 12:15:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:16:18.684 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.684 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.684 [2024-11-25 12:15:14.733842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:18.684 [2024-11-25 12:15:14.736472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:18.684 [2024-11-25 12:15:14.736731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:18.684 [2024-11-25 12:15:14.737017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:18.684 [2024-11-25 12:15:14.737469] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:16:18.684 [2024-11-25 12:15:14.737620] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:18.684 [2024-11-25 12:15:14.737995] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:16:18.684 [2024-11-25 12:15:14.738390] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:16:18.684 [2024-11-25 12:15:14.738537] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:16:18.684 [2024-11-25 12:15:14.738955] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.684 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.684 12:15:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:16:18.684 12:15:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.684 12:15:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.684 12:15:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:18.684 12:15:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.684 12:15:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:18.684 12:15:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.684 12:15:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.684 12:15:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.684 12:15:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.684 12:15:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.684 12:15:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.684 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.684 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.684 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.942 12:15:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.942 "name": "raid_bdev1", 00:16:18.942 "uuid": "44965be1-3320-4a56-b729-4c4c6146ae15", 00:16:18.942 "strip_size_kb": 64, 00:16:18.942 "state": "online", 00:16:18.942 "raid_level": "concat", 00:16:18.942 "superblock": true, 00:16:18.942 "num_base_bdevs": 4, 00:16:18.942 "num_base_bdevs_discovered": 4, 00:16:18.942 
"num_base_bdevs_operational": 4, 00:16:18.942 "base_bdevs_list": [ 00:16:18.942 { 00:16:18.942 "name": "BaseBdev1", 00:16:18.942 "uuid": "7ef34a88-dfbb-5133-9826-1f2faeabc8aa", 00:16:18.942 "is_configured": true, 00:16:18.942 "data_offset": 2048, 00:16:18.942 "data_size": 63488 00:16:18.942 }, 00:16:18.942 { 00:16:18.942 "name": "BaseBdev2", 00:16:18.942 "uuid": "7fca9508-57ad-55fb-ab0c-5425f0918cfd", 00:16:18.942 "is_configured": true, 00:16:18.942 "data_offset": 2048, 00:16:18.942 "data_size": 63488 00:16:18.942 }, 00:16:18.942 { 00:16:18.942 "name": "BaseBdev3", 00:16:18.942 "uuid": "61e02c96-3953-56c9-9ad8-653918fcdce8", 00:16:18.942 "is_configured": true, 00:16:18.942 "data_offset": 2048, 00:16:18.942 "data_size": 63488 00:16:18.942 }, 00:16:18.942 { 00:16:18.942 "name": "BaseBdev4", 00:16:18.942 "uuid": "a2ab93d7-3505-5401-ba74-985cc8b7492c", 00:16:18.942 "is_configured": true, 00:16:18.942 "data_offset": 2048, 00:16:18.942 "data_size": 63488 00:16:18.942 } 00:16:18.942 ] 00:16:18.942 }' 00:16:18.942 12:15:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.942 12:15:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.200 12:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:19.200 12:15:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:19.458 [2024-11-25 12:15:15.348460] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:16:20.393 12:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:20.393 12:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.393 12:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.393 12:15:16 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.393 12:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:20.393 12:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:16:20.393 12:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:16:20.393 12:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:16:20.393 12:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:20.393 12:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.393 12:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:20.393 12:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.393 12:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:20.393 12:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.393 12:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.393 12:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.393 12:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.393 12:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.393 12:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.393 12:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.393 12:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.393 12:15:16 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.393 12:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.393 "name": "raid_bdev1", 00:16:20.393 "uuid": "44965be1-3320-4a56-b729-4c4c6146ae15", 00:16:20.393 "strip_size_kb": 64, 00:16:20.393 "state": "online", 00:16:20.393 "raid_level": "concat", 00:16:20.393 "superblock": true, 00:16:20.393 "num_base_bdevs": 4, 00:16:20.393 "num_base_bdevs_discovered": 4, 00:16:20.393 "num_base_bdevs_operational": 4, 00:16:20.393 "base_bdevs_list": [ 00:16:20.393 { 00:16:20.393 "name": "BaseBdev1", 00:16:20.393 "uuid": "7ef34a88-dfbb-5133-9826-1f2faeabc8aa", 00:16:20.393 "is_configured": true, 00:16:20.393 "data_offset": 2048, 00:16:20.393 "data_size": 63488 00:16:20.393 }, 00:16:20.393 { 00:16:20.393 "name": "BaseBdev2", 00:16:20.393 "uuid": "7fca9508-57ad-55fb-ab0c-5425f0918cfd", 00:16:20.393 "is_configured": true, 00:16:20.393 "data_offset": 2048, 00:16:20.393 "data_size": 63488 00:16:20.393 }, 00:16:20.393 { 00:16:20.393 "name": "BaseBdev3", 00:16:20.393 "uuid": "61e02c96-3953-56c9-9ad8-653918fcdce8", 00:16:20.393 "is_configured": true, 00:16:20.393 "data_offset": 2048, 00:16:20.393 "data_size": 63488 00:16:20.393 }, 00:16:20.393 { 00:16:20.393 "name": "BaseBdev4", 00:16:20.393 "uuid": "a2ab93d7-3505-5401-ba74-985cc8b7492c", 00:16:20.393 "is_configured": true, 00:16:20.393 "data_offset": 2048, 00:16:20.393 "data_size": 63488 00:16:20.393 } 00:16:20.393 ] 00:16:20.393 }' 00:16:20.393 12:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.393 12:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.961 12:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:20.961 12:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.961 12:15:16 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:20.961 [2024-11-25 12:15:16.783887] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:20.961 [2024-11-25 12:15:16.784072] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:20.961 [2024-11-25 12:15:16.787504] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:20.961 [2024-11-25 12:15:16.787729] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:20.961 [2024-11-25 12:15:16.787838] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:20.961 [2024-11-25 12:15:16.788028] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:16:20.961 { 00:16:20.961 "results": [ 00:16:20.961 { 00:16:20.961 "job": "raid_bdev1", 00:16:20.961 "core_mask": "0x1", 00:16:20.961 "workload": "randrw", 00:16:20.961 "percentage": 50, 00:16:20.961 "status": "finished", 00:16:20.961 "queue_depth": 1, 00:16:20.961 "io_size": 131072, 00:16:20.961 "runtime": 1.433284, 00:16:20.961 "iops": 10646.180380161923, 00:16:20.961 "mibps": 1330.7725475202403, 00:16:20.961 "io_failed": 1, 00:16:20.961 "io_timeout": 0, 00:16:20.961 "avg_latency_us": 131.2474147503872, 00:16:20.961 "min_latency_us": 42.589090909090906, 00:16:20.961 "max_latency_us": 1809.6872727272728 00:16:20.961 } 00:16:20.961 ], 00:16:20.961 "core_count": 1 00:16:20.961 } 00:16:20.961 12:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.961 12:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73175 00:16:20.961 12:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73175 ']' 00:16:20.961 12:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73175 00:16:20.961 12:15:16 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:16:20.961 12:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:20.961 12:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73175 00:16:20.961 killing process with pid 73175 00:16:20.961 12:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:20.961 12:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:20.961 12:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73175' 00:16:20.961 12:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73175 00:16:20.961 [2024-11-25 12:15:16.824086] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:20.961 12:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73175 00:16:21.219 [2024-11-25 12:15:17.113384] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:22.171 12:15:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.fNLkdxlbas 00:16:22.171 12:15:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:16:22.171 12:15:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:16:22.171 12:15:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:16:22.171 12:15:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:16:22.171 12:15:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:22.171 12:15:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:22.171 12:15:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:16:22.171 00:16:22.171 real 0m4.795s 00:16:22.171 user 0m5.868s 
00:16:22.171 sys 0m0.585s 00:16:22.171 ************************************ 00:16:22.171 END TEST raid_write_error_test 00:16:22.171 ************************************ 00:16:22.171 12:15:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:22.171 12:15:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.171 12:15:18 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:16:22.171 12:15:18 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:16:22.171 12:15:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:22.171 12:15:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:22.171 12:15:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:22.430 ************************************ 00:16:22.430 START TEST raid_state_function_test 00:16:22.430 ************************************ 00:16:22.430 12:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:16:22.430 12:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:22.430 12:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:22.430 12:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:22.430 12:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:22.430 12:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:22.430 12:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:22.430 12:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:22.430 12:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:22.430 
12:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:22.430 12:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:22.430 12:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:22.430 12:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:22.430 12:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:22.430 12:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:22.430 12:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:22.430 12:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:22.430 12:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:22.430 12:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:22.430 12:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:22.430 12:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:22.430 12:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:22.430 12:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:22.430 12:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:22.430 Process raid pid: 73319 00:16:22.430 12:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:22.430 12:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:22.430 12:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 
00:16:22.430 12:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:22.430 12:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:22.430 12:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73319 00:16:22.430 12:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:22.431 12:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73319' 00:16:22.431 12:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73319 00:16:22.431 12:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73319 ']' 00:16:22.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.431 12:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.431 12:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:22.431 12:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.431 12:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:22.431 12:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.431 [2024-11-25 12:15:18.357039] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:16:22.431 [2024-11-25 12:15:18.357415] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:22.689 [2024-11-25 12:15:18.533936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.689 [2024-11-25 12:15:18.664990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.947 [2024-11-25 12:15:18.874425] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:22.947 [2024-11-25 12:15:18.874677] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:23.513 12:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:23.513 12:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:16:23.513 12:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:23.513 12:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.513 12:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.513 [2024-11-25 12:15:19.385665] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:23.513 [2024-11-25 12:15:19.385731] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:23.513 [2024-11-25 12:15:19.385750] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:23.513 [2024-11-25 12:15:19.385766] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:23.513 [2024-11-25 12:15:19.385776] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:16:23.513 [2024-11-25 12:15:19.385790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:23.513 [2024-11-25 12:15:19.385800] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:23.513 [2024-11-25 12:15:19.385814] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:23.513 12:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.513 12:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:23.513 12:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:23.513 12:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:23.513 12:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:23.513 12:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:23.513 12:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:23.513 12:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.513 12:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.513 12:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.513 12:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.513 12:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.513 12:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.513 12:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:16:23.513 12:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.513 12:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.513 12:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.513 "name": "Existed_Raid", 00:16:23.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.513 "strip_size_kb": 0, 00:16:23.513 "state": "configuring", 00:16:23.513 "raid_level": "raid1", 00:16:23.513 "superblock": false, 00:16:23.513 "num_base_bdevs": 4, 00:16:23.513 "num_base_bdevs_discovered": 0, 00:16:23.513 "num_base_bdevs_operational": 4, 00:16:23.513 "base_bdevs_list": [ 00:16:23.513 { 00:16:23.513 "name": "BaseBdev1", 00:16:23.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.513 "is_configured": false, 00:16:23.513 "data_offset": 0, 00:16:23.513 "data_size": 0 00:16:23.513 }, 00:16:23.513 { 00:16:23.513 "name": "BaseBdev2", 00:16:23.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.513 "is_configured": false, 00:16:23.513 "data_offset": 0, 00:16:23.513 "data_size": 0 00:16:23.513 }, 00:16:23.513 { 00:16:23.513 "name": "BaseBdev3", 00:16:23.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.513 "is_configured": false, 00:16:23.513 "data_offset": 0, 00:16:23.513 "data_size": 0 00:16:23.513 }, 00:16:23.513 { 00:16:23.513 "name": "BaseBdev4", 00:16:23.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.513 "is_configured": false, 00:16:23.513 "data_offset": 0, 00:16:23.513 "data_size": 0 00:16:23.513 } 00:16:23.513 ] 00:16:23.513 }' 00:16:23.513 12:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.513 12:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.079 12:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:16:24.079 12:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.079 12:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.079 [2024-11-25 12:15:19.893777] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:24.079 [2024-11-25 12:15:19.893841] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:24.079 12:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.080 12:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:24.080 12:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.080 12:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.080 [2024-11-25 12:15:19.901743] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:24.080 [2024-11-25 12:15:19.901796] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:24.080 [2024-11-25 12:15:19.901812] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:24.080 [2024-11-25 12:15:19.901838] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:24.080 [2024-11-25 12:15:19.901848] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:24.080 [2024-11-25 12:15:19.901862] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:24.080 [2024-11-25 12:15:19.901872] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:24.080 [2024-11-25 12:15:19.901886] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:24.080 12:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.080 12:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:24.080 12:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.080 12:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.080 [2024-11-25 12:15:19.946622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:24.080 BaseBdev1 00:16:24.080 12:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.080 12:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:24.080 12:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:24.080 12:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:24.080 12:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:24.080 12:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:24.080 12:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:24.080 12:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:24.080 12:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.080 12:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.080 12:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.080 12:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
-t 2000 00:16:24.080 12:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.080 12:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.080 [ 00:16:24.080 { 00:16:24.080 "name": "BaseBdev1", 00:16:24.080 "aliases": [ 00:16:24.080 "74785ab5-d41a-4d93-b14d-fab262fae991" 00:16:24.080 ], 00:16:24.080 "product_name": "Malloc disk", 00:16:24.080 "block_size": 512, 00:16:24.080 "num_blocks": 65536, 00:16:24.080 "uuid": "74785ab5-d41a-4d93-b14d-fab262fae991", 00:16:24.080 "assigned_rate_limits": { 00:16:24.080 "rw_ios_per_sec": 0, 00:16:24.080 "rw_mbytes_per_sec": 0, 00:16:24.080 "r_mbytes_per_sec": 0, 00:16:24.080 "w_mbytes_per_sec": 0 00:16:24.080 }, 00:16:24.080 "claimed": true, 00:16:24.080 "claim_type": "exclusive_write", 00:16:24.080 "zoned": false, 00:16:24.080 "supported_io_types": { 00:16:24.080 "read": true, 00:16:24.080 "write": true, 00:16:24.080 "unmap": true, 00:16:24.080 "flush": true, 00:16:24.080 "reset": true, 00:16:24.080 "nvme_admin": false, 00:16:24.080 "nvme_io": false, 00:16:24.080 "nvme_io_md": false, 00:16:24.080 "write_zeroes": true, 00:16:24.080 "zcopy": true, 00:16:24.080 "get_zone_info": false, 00:16:24.080 "zone_management": false, 00:16:24.080 "zone_append": false, 00:16:24.080 "compare": false, 00:16:24.080 "compare_and_write": false, 00:16:24.080 "abort": true, 00:16:24.080 "seek_hole": false, 00:16:24.080 "seek_data": false, 00:16:24.080 "copy": true, 00:16:24.080 "nvme_iov_md": false 00:16:24.080 }, 00:16:24.080 "memory_domains": [ 00:16:24.080 { 00:16:24.080 "dma_device_id": "system", 00:16:24.080 "dma_device_type": 1 00:16:24.080 }, 00:16:24.080 { 00:16:24.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.080 "dma_device_type": 2 00:16:24.080 } 00:16:24.080 ], 00:16:24.080 "driver_specific": {} 00:16:24.080 } 00:16:24.080 ] 00:16:24.080 12:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:24.080 12:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:24.080 12:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:24.080 12:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.080 12:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:24.080 12:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:24.080 12:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:24.080 12:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:24.080 12:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.080 12:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.080 12:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.080 12:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.080 12:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.080 12:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.080 12:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.080 12:15:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.080 12:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.080 12:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.080 "name": "Existed_Raid", 00:16:24.080 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:24.080 "strip_size_kb": 0, 00:16:24.080 "state": "configuring", 00:16:24.080 "raid_level": "raid1", 00:16:24.080 "superblock": false, 00:16:24.080 "num_base_bdevs": 4, 00:16:24.080 "num_base_bdevs_discovered": 1, 00:16:24.080 "num_base_bdevs_operational": 4, 00:16:24.080 "base_bdevs_list": [ 00:16:24.080 { 00:16:24.080 "name": "BaseBdev1", 00:16:24.080 "uuid": "74785ab5-d41a-4d93-b14d-fab262fae991", 00:16:24.080 "is_configured": true, 00:16:24.080 "data_offset": 0, 00:16:24.080 "data_size": 65536 00:16:24.080 }, 00:16:24.080 { 00:16:24.080 "name": "BaseBdev2", 00:16:24.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.080 "is_configured": false, 00:16:24.080 "data_offset": 0, 00:16:24.080 "data_size": 0 00:16:24.080 }, 00:16:24.080 { 00:16:24.080 "name": "BaseBdev3", 00:16:24.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.080 "is_configured": false, 00:16:24.080 "data_offset": 0, 00:16:24.080 "data_size": 0 00:16:24.081 }, 00:16:24.081 { 00:16:24.081 "name": "BaseBdev4", 00:16:24.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.081 "is_configured": false, 00:16:24.081 "data_offset": 0, 00:16:24.081 "data_size": 0 00:16:24.081 } 00:16:24.081 ] 00:16:24.081 }' 00:16:24.081 12:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.081 12:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.648 12:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:24.648 12:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.648 12:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.648 [2024-11-25 12:15:20.486802] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:24.648 [2024-11-25 12:15:20.486868] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:24.648 12:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.648 12:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:24.648 12:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.648 12:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.648 [2024-11-25 12:15:20.494868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:24.648 [2024-11-25 12:15:20.497262] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:24.648 [2024-11-25 12:15:20.497312] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:24.648 [2024-11-25 12:15:20.497328] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:24.648 [2024-11-25 12:15:20.497531] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:24.648 [2024-11-25 12:15:20.497592] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:24.648 [2024-11-25 12:15:20.497768] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:24.648 12:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.648 12:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:24.648 12:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:24.649 12:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:24.649 12:15:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.649 12:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:24.649 12:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:24.649 12:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:24.649 12:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:24.649 12:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.649 12:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.649 12:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.649 12:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.649 12:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.649 12:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.649 12:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.649 12:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.649 12:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.649 12:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.649 "name": "Existed_Raid", 00:16:24.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.649 "strip_size_kb": 0, 00:16:24.649 "state": "configuring", 00:16:24.649 "raid_level": "raid1", 00:16:24.649 "superblock": false, 00:16:24.649 "num_base_bdevs": 4, 00:16:24.649 "num_base_bdevs_discovered": 1, 00:16:24.649 
"num_base_bdevs_operational": 4, 00:16:24.649 "base_bdevs_list": [ 00:16:24.649 { 00:16:24.649 "name": "BaseBdev1", 00:16:24.649 "uuid": "74785ab5-d41a-4d93-b14d-fab262fae991", 00:16:24.649 "is_configured": true, 00:16:24.649 "data_offset": 0, 00:16:24.649 "data_size": 65536 00:16:24.649 }, 00:16:24.649 { 00:16:24.649 "name": "BaseBdev2", 00:16:24.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.649 "is_configured": false, 00:16:24.649 "data_offset": 0, 00:16:24.649 "data_size": 0 00:16:24.649 }, 00:16:24.649 { 00:16:24.649 "name": "BaseBdev3", 00:16:24.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.649 "is_configured": false, 00:16:24.649 "data_offset": 0, 00:16:24.649 "data_size": 0 00:16:24.649 }, 00:16:24.649 { 00:16:24.649 "name": "BaseBdev4", 00:16:24.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.649 "is_configured": false, 00:16:24.649 "data_offset": 0, 00:16:24.649 "data_size": 0 00:16:24.649 } 00:16:24.649 ] 00:16:24.649 }' 00:16:24.649 12:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.649 12:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.928 12:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:24.928 12:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.928 12:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.188 [2024-11-25 12:15:21.049952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:25.188 BaseBdev2 00:16:25.188 12:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.188 12:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:25.188 12:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev2 00:16:25.188 12:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:25.188 12:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:25.188 12:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:25.188 12:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:25.188 12:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:25.188 12:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.188 12:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.188 12:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.188 12:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:25.188 12:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.188 12:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.188 [ 00:16:25.188 { 00:16:25.188 "name": "BaseBdev2", 00:16:25.188 "aliases": [ 00:16:25.189 "96437022-3f1a-4143-b403-81ed39492aac" 00:16:25.189 ], 00:16:25.189 "product_name": "Malloc disk", 00:16:25.189 "block_size": 512, 00:16:25.189 "num_blocks": 65536, 00:16:25.189 "uuid": "96437022-3f1a-4143-b403-81ed39492aac", 00:16:25.189 "assigned_rate_limits": { 00:16:25.189 "rw_ios_per_sec": 0, 00:16:25.189 "rw_mbytes_per_sec": 0, 00:16:25.189 "r_mbytes_per_sec": 0, 00:16:25.189 "w_mbytes_per_sec": 0 00:16:25.189 }, 00:16:25.189 "claimed": true, 00:16:25.189 "claim_type": "exclusive_write", 00:16:25.189 "zoned": false, 00:16:25.189 "supported_io_types": { 00:16:25.189 "read": true, 00:16:25.189 "write": true, 00:16:25.189 
"unmap": true, 00:16:25.189 "flush": true, 00:16:25.189 "reset": true, 00:16:25.189 "nvme_admin": false, 00:16:25.189 "nvme_io": false, 00:16:25.189 "nvme_io_md": false, 00:16:25.189 "write_zeroes": true, 00:16:25.189 "zcopy": true, 00:16:25.189 "get_zone_info": false, 00:16:25.189 "zone_management": false, 00:16:25.189 "zone_append": false, 00:16:25.189 "compare": false, 00:16:25.189 "compare_and_write": false, 00:16:25.189 "abort": true, 00:16:25.189 "seek_hole": false, 00:16:25.189 "seek_data": false, 00:16:25.189 "copy": true, 00:16:25.189 "nvme_iov_md": false 00:16:25.189 }, 00:16:25.189 "memory_domains": [ 00:16:25.189 { 00:16:25.189 "dma_device_id": "system", 00:16:25.189 "dma_device_type": 1 00:16:25.189 }, 00:16:25.189 { 00:16:25.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.189 "dma_device_type": 2 00:16:25.189 } 00:16:25.189 ], 00:16:25.189 "driver_specific": {} 00:16:25.189 } 00:16:25.189 ] 00:16:25.189 12:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.189 12:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:25.189 12:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:25.189 12:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:25.189 12:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:25.189 12:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.189 12:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:25.189 12:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:25.189 12:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:25.189 12:15:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:25.189 12:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.189 12:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.189 12:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.189 12:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.189 12:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.189 12:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.189 12:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.189 12:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.189 12:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.189 12:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.189 "name": "Existed_Raid", 00:16:25.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.189 "strip_size_kb": 0, 00:16:25.189 "state": "configuring", 00:16:25.189 "raid_level": "raid1", 00:16:25.189 "superblock": false, 00:16:25.189 "num_base_bdevs": 4, 00:16:25.189 "num_base_bdevs_discovered": 2, 00:16:25.189 "num_base_bdevs_operational": 4, 00:16:25.189 "base_bdevs_list": [ 00:16:25.189 { 00:16:25.189 "name": "BaseBdev1", 00:16:25.189 "uuid": "74785ab5-d41a-4d93-b14d-fab262fae991", 00:16:25.189 "is_configured": true, 00:16:25.189 "data_offset": 0, 00:16:25.189 "data_size": 65536 00:16:25.189 }, 00:16:25.189 { 00:16:25.189 "name": "BaseBdev2", 00:16:25.189 "uuid": "96437022-3f1a-4143-b403-81ed39492aac", 00:16:25.189 "is_configured": true, 00:16:25.189 
"data_offset": 0, 00:16:25.189 "data_size": 65536 00:16:25.189 }, 00:16:25.189 { 00:16:25.189 "name": "BaseBdev3", 00:16:25.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.189 "is_configured": false, 00:16:25.189 "data_offset": 0, 00:16:25.189 "data_size": 0 00:16:25.189 }, 00:16:25.189 { 00:16:25.189 "name": "BaseBdev4", 00:16:25.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.189 "is_configured": false, 00:16:25.189 "data_offset": 0, 00:16:25.189 "data_size": 0 00:16:25.189 } 00:16:25.189 ] 00:16:25.189 }' 00:16:25.189 12:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.189 12:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.760 12:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:25.760 12:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.760 12:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.761 [2024-11-25 12:15:21.656127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:25.761 BaseBdev3 00:16:25.761 12:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.761 12:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:25.761 12:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:25.761 12:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:25.761 12:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:25.761 12:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:25.761 12:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:16:25.761 12:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:25.761 12:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.761 12:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.761 12:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.761 12:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:25.761 12:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.761 12:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.761 [ 00:16:25.761 { 00:16:25.761 "name": "BaseBdev3", 00:16:25.761 "aliases": [ 00:16:25.761 "920ba7fe-61aa-471d-9dd4-9d17fbc3923c" 00:16:25.761 ], 00:16:25.761 "product_name": "Malloc disk", 00:16:25.761 "block_size": 512, 00:16:25.761 "num_blocks": 65536, 00:16:25.761 "uuid": "920ba7fe-61aa-471d-9dd4-9d17fbc3923c", 00:16:25.761 "assigned_rate_limits": { 00:16:25.761 "rw_ios_per_sec": 0, 00:16:25.761 "rw_mbytes_per_sec": 0, 00:16:25.761 "r_mbytes_per_sec": 0, 00:16:25.761 "w_mbytes_per_sec": 0 00:16:25.761 }, 00:16:25.761 "claimed": true, 00:16:25.761 "claim_type": "exclusive_write", 00:16:25.761 "zoned": false, 00:16:25.761 "supported_io_types": { 00:16:25.761 "read": true, 00:16:25.761 "write": true, 00:16:25.761 "unmap": true, 00:16:25.761 "flush": true, 00:16:25.761 "reset": true, 00:16:25.761 "nvme_admin": false, 00:16:25.761 "nvme_io": false, 00:16:25.761 "nvme_io_md": false, 00:16:25.761 "write_zeroes": true, 00:16:25.761 "zcopy": true, 00:16:25.761 "get_zone_info": false, 00:16:25.761 "zone_management": false, 00:16:25.761 "zone_append": false, 00:16:25.761 "compare": false, 00:16:25.761 "compare_and_write": false, 00:16:25.761 "abort": true, 
00:16:25.761 "seek_hole": false, 00:16:25.761 "seek_data": false, 00:16:25.761 "copy": true, 00:16:25.761 "nvme_iov_md": false 00:16:25.761 }, 00:16:25.761 "memory_domains": [ 00:16:25.761 { 00:16:25.761 "dma_device_id": "system", 00:16:25.761 "dma_device_type": 1 00:16:25.761 }, 00:16:25.761 { 00:16:25.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.761 "dma_device_type": 2 00:16:25.761 } 00:16:25.761 ], 00:16:25.761 "driver_specific": {} 00:16:25.761 } 00:16:25.761 ] 00:16:25.761 12:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.761 12:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:25.761 12:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:25.761 12:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:25.761 12:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:25.761 12:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.761 12:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:25.761 12:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:25.761 12:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:25.761 12:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:25.761 12:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.761 12:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.761 12:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.761 12:15:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.761 12:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.761 12:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.761 12:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.761 12:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.761 12:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.761 12:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.761 "name": "Existed_Raid", 00:16:25.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.761 "strip_size_kb": 0, 00:16:25.761 "state": "configuring", 00:16:25.761 "raid_level": "raid1", 00:16:25.761 "superblock": false, 00:16:25.761 "num_base_bdevs": 4, 00:16:25.761 "num_base_bdevs_discovered": 3, 00:16:25.761 "num_base_bdevs_operational": 4, 00:16:25.761 "base_bdevs_list": [ 00:16:25.761 { 00:16:25.761 "name": "BaseBdev1", 00:16:25.761 "uuid": "74785ab5-d41a-4d93-b14d-fab262fae991", 00:16:25.761 "is_configured": true, 00:16:25.761 "data_offset": 0, 00:16:25.761 "data_size": 65536 00:16:25.761 }, 00:16:25.761 { 00:16:25.761 "name": "BaseBdev2", 00:16:25.761 "uuid": "96437022-3f1a-4143-b403-81ed39492aac", 00:16:25.761 "is_configured": true, 00:16:25.761 "data_offset": 0, 00:16:25.761 "data_size": 65536 00:16:25.761 }, 00:16:25.761 { 00:16:25.761 "name": "BaseBdev3", 00:16:25.761 "uuid": "920ba7fe-61aa-471d-9dd4-9d17fbc3923c", 00:16:25.761 "is_configured": true, 00:16:25.761 "data_offset": 0, 00:16:25.761 "data_size": 65536 00:16:25.761 }, 00:16:25.761 { 00:16:25.761 "name": "BaseBdev4", 00:16:25.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.761 "is_configured": false, 00:16:25.761 "data_offset": 
0, 00:16:25.761 "data_size": 0 00:16:25.761 } 00:16:25.761 ] 00:16:25.761 }' 00:16:25.761 12:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.761 12:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.329 12:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:26.329 12:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.329 12:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.329 [2024-11-25 12:15:22.198988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:26.329 [2024-11-25 12:15:22.199260] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:26.329 [2024-11-25 12:15:22.199285] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:26.329 [2024-11-25 12:15:22.199688] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:26.329 [2024-11-25 12:15:22.199945] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:26.329 [2024-11-25 12:15:22.199980] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:26.329 [2024-11-25 12:15:22.200298] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.329 BaseBdev4 00:16:26.329 12:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.329 12:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:26.329 12:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:26.329 12:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local 
bdev_timeout= 00:16:26.329 12:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:26.329 12:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:26.329 12:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:26.329 12:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:26.329 12:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.329 12:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.329 12:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.329 12:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:26.329 12:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.329 12:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.329 [ 00:16:26.329 { 00:16:26.329 "name": "BaseBdev4", 00:16:26.329 "aliases": [ 00:16:26.329 "2336b70d-aae9-414a-9df0-6ebdfdb0cb7c" 00:16:26.329 ], 00:16:26.329 "product_name": "Malloc disk", 00:16:26.329 "block_size": 512, 00:16:26.329 "num_blocks": 65536, 00:16:26.329 "uuid": "2336b70d-aae9-414a-9df0-6ebdfdb0cb7c", 00:16:26.329 "assigned_rate_limits": { 00:16:26.329 "rw_ios_per_sec": 0, 00:16:26.329 "rw_mbytes_per_sec": 0, 00:16:26.329 "r_mbytes_per_sec": 0, 00:16:26.329 "w_mbytes_per_sec": 0 00:16:26.329 }, 00:16:26.329 "claimed": true, 00:16:26.329 "claim_type": "exclusive_write", 00:16:26.329 "zoned": false, 00:16:26.329 "supported_io_types": { 00:16:26.329 "read": true, 00:16:26.329 "write": true, 00:16:26.329 "unmap": true, 00:16:26.329 "flush": true, 00:16:26.329 "reset": true, 00:16:26.329 "nvme_admin": false, 00:16:26.329 "nvme_io": 
false, 00:16:26.329 "nvme_io_md": false, 00:16:26.329 "write_zeroes": true, 00:16:26.329 "zcopy": true, 00:16:26.329 "get_zone_info": false, 00:16:26.329 "zone_management": false, 00:16:26.329 "zone_append": false, 00:16:26.329 "compare": false, 00:16:26.329 "compare_and_write": false, 00:16:26.329 "abort": true, 00:16:26.329 "seek_hole": false, 00:16:26.329 "seek_data": false, 00:16:26.329 "copy": true, 00:16:26.329 "nvme_iov_md": false 00:16:26.329 }, 00:16:26.329 "memory_domains": [ 00:16:26.329 { 00:16:26.329 "dma_device_id": "system", 00:16:26.329 "dma_device_type": 1 00:16:26.329 }, 00:16:26.329 { 00:16:26.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.329 "dma_device_type": 2 00:16:26.329 } 00:16:26.329 ], 00:16:26.329 "driver_specific": {} 00:16:26.329 } 00:16:26.329 ] 00:16:26.329 12:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.329 12:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:26.329 12:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:26.329 12:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:26.329 12:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:16:26.329 12:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:26.329 12:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.329 12:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:26.329 12:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:26.329 12:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:26.329 12:15:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.329 12:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.329 12:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.329 12:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.329 12:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.329 12:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.329 12:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.329 12:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.329 12:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.329 12:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.329 "name": "Existed_Raid", 00:16:26.329 "uuid": "2fd303bc-62ca-451b-95bc-e8728fc95e22", 00:16:26.329 "strip_size_kb": 0, 00:16:26.329 "state": "online", 00:16:26.329 "raid_level": "raid1", 00:16:26.329 "superblock": false, 00:16:26.329 "num_base_bdevs": 4, 00:16:26.329 "num_base_bdevs_discovered": 4, 00:16:26.329 "num_base_bdevs_operational": 4, 00:16:26.329 "base_bdevs_list": [ 00:16:26.329 { 00:16:26.329 "name": "BaseBdev1", 00:16:26.329 "uuid": "74785ab5-d41a-4d93-b14d-fab262fae991", 00:16:26.329 "is_configured": true, 00:16:26.329 "data_offset": 0, 00:16:26.329 "data_size": 65536 00:16:26.329 }, 00:16:26.329 { 00:16:26.329 "name": "BaseBdev2", 00:16:26.329 "uuid": "96437022-3f1a-4143-b403-81ed39492aac", 00:16:26.329 "is_configured": true, 00:16:26.329 "data_offset": 0, 00:16:26.329 "data_size": 65536 00:16:26.329 }, 00:16:26.329 { 00:16:26.329 "name": "BaseBdev3", 00:16:26.329 "uuid": "920ba7fe-61aa-471d-9dd4-9d17fbc3923c", 
00:16:26.329 "is_configured": true, 00:16:26.329 "data_offset": 0, 00:16:26.329 "data_size": 65536 00:16:26.329 }, 00:16:26.329 { 00:16:26.329 "name": "BaseBdev4", 00:16:26.329 "uuid": "2336b70d-aae9-414a-9df0-6ebdfdb0cb7c", 00:16:26.329 "is_configured": true, 00:16:26.329 "data_offset": 0, 00:16:26.329 "data_size": 65536 00:16:26.329 } 00:16:26.329 ] 00:16:26.329 }' 00:16:26.329 12:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.329 12:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.896 12:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:26.896 12:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:26.896 12:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:26.896 12:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:26.896 12:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:26.896 12:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:26.896 12:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:26.896 12:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.896 12:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.896 12:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:26.896 [2024-11-25 12:15:22.767649] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:26.896 12:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.896 12:15:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:26.896 "name": "Existed_Raid", 00:16:26.896 "aliases": [ 00:16:26.896 "2fd303bc-62ca-451b-95bc-e8728fc95e22" 00:16:26.897 ], 00:16:26.897 "product_name": "Raid Volume", 00:16:26.897 "block_size": 512, 00:16:26.897 "num_blocks": 65536, 00:16:26.897 "uuid": "2fd303bc-62ca-451b-95bc-e8728fc95e22", 00:16:26.897 "assigned_rate_limits": { 00:16:26.897 "rw_ios_per_sec": 0, 00:16:26.897 "rw_mbytes_per_sec": 0, 00:16:26.897 "r_mbytes_per_sec": 0, 00:16:26.897 "w_mbytes_per_sec": 0 00:16:26.897 }, 00:16:26.897 "claimed": false, 00:16:26.897 "zoned": false, 00:16:26.897 "supported_io_types": { 00:16:26.897 "read": true, 00:16:26.897 "write": true, 00:16:26.897 "unmap": false, 00:16:26.897 "flush": false, 00:16:26.897 "reset": true, 00:16:26.897 "nvme_admin": false, 00:16:26.897 "nvme_io": false, 00:16:26.897 "nvme_io_md": false, 00:16:26.897 "write_zeroes": true, 00:16:26.897 "zcopy": false, 00:16:26.897 "get_zone_info": false, 00:16:26.897 "zone_management": false, 00:16:26.897 "zone_append": false, 00:16:26.897 "compare": false, 00:16:26.897 "compare_and_write": false, 00:16:26.897 "abort": false, 00:16:26.897 "seek_hole": false, 00:16:26.897 "seek_data": false, 00:16:26.897 "copy": false, 00:16:26.897 "nvme_iov_md": false 00:16:26.897 }, 00:16:26.897 "memory_domains": [ 00:16:26.897 { 00:16:26.897 "dma_device_id": "system", 00:16:26.897 "dma_device_type": 1 00:16:26.897 }, 00:16:26.897 { 00:16:26.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.897 "dma_device_type": 2 00:16:26.897 }, 00:16:26.897 { 00:16:26.897 "dma_device_id": "system", 00:16:26.897 "dma_device_type": 1 00:16:26.897 }, 00:16:26.897 { 00:16:26.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.897 "dma_device_type": 2 00:16:26.897 }, 00:16:26.897 { 00:16:26.897 "dma_device_id": "system", 00:16:26.897 "dma_device_type": 1 00:16:26.897 }, 00:16:26.897 { 00:16:26.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.897 "dma_device_type": 2 
00:16:26.897 }, 00:16:26.897 { 00:16:26.897 "dma_device_id": "system", 00:16:26.897 "dma_device_type": 1 00:16:26.897 }, 00:16:26.897 { 00:16:26.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.897 "dma_device_type": 2 00:16:26.897 } 00:16:26.897 ], 00:16:26.897 "driver_specific": { 00:16:26.897 "raid": { 00:16:26.897 "uuid": "2fd303bc-62ca-451b-95bc-e8728fc95e22", 00:16:26.897 "strip_size_kb": 0, 00:16:26.897 "state": "online", 00:16:26.897 "raid_level": "raid1", 00:16:26.897 "superblock": false, 00:16:26.897 "num_base_bdevs": 4, 00:16:26.897 "num_base_bdevs_discovered": 4, 00:16:26.897 "num_base_bdevs_operational": 4, 00:16:26.897 "base_bdevs_list": [ 00:16:26.897 { 00:16:26.897 "name": "BaseBdev1", 00:16:26.897 "uuid": "74785ab5-d41a-4d93-b14d-fab262fae991", 00:16:26.897 "is_configured": true, 00:16:26.897 "data_offset": 0, 00:16:26.897 "data_size": 65536 00:16:26.897 }, 00:16:26.897 { 00:16:26.897 "name": "BaseBdev2", 00:16:26.897 "uuid": "96437022-3f1a-4143-b403-81ed39492aac", 00:16:26.897 "is_configured": true, 00:16:26.897 "data_offset": 0, 00:16:26.897 "data_size": 65536 00:16:26.897 }, 00:16:26.897 { 00:16:26.897 "name": "BaseBdev3", 00:16:26.897 "uuid": "920ba7fe-61aa-471d-9dd4-9d17fbc3923c", 00:16:26.897 "is_configured": true, 00:16:26.897 "data_offset": 0, 00:16:26.897 "data_size": 65536 00:16:26.897 }, 00:16:26.897 { 00:16:26.897 "name": "BaseBdev4", 00:16:26.897 "uuid": "2336b70d-aae9-414a-9df0-6ebdfdb0cb7c", 00:16:26.897 "is_configured": true, 00:16:26.897 "data_offset": 0, 00:16:26.897 "data_size": 65536 00:16:26.897 } 00:16:26.897 ] 00:16:26.897 } 00:16:26.897 } 00:16:26.897 }' 00:16:26.897 12:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:26.897 12:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:26.897 BaseBdev2 00:16:26.897 BaseBdev3 00:16:26.897 BaseBdev4' 00:16:26.897 
12:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:26.897 12:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:26.897 12:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:26.897 12:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:26.897 12:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.897 12:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.897 12:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:26.897 12:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.897 12:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:26.897 12:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:26.897 12:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:26.897 12:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:26.897 12:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:26.897 12:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.897 12:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.897 12:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.157 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:16:27.157 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:27.157 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:27.157 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:27.157 12:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.157 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.157 12:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.157 12:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.157 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:27.157 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:27.157 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:27.157 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.157 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:27.157 12:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.157 12:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.157 12:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.157 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:27.157 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:16:27.157 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:27.157 12:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.157 12:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.157 [2024-11-25 12:15:23.127370] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:27.157 12:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.157 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:27.157 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:27.157 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:27.157 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:27.157 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:27.157 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:27.157 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:27.157 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.157 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:27.157 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:27.157 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:27.157 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.157 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:16:27.157 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.157 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.157 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.157 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.157 12:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.157 12:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.157 12:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.417 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.417 "name": "Existed_Raid", 00:16:27.417 "uuid": "2fd303bc-62ca-451b-95bc-e8728fc95e22", 00:16:27.417 "strip_size_kb": 0, 00:16:27.417 "state": "online", 00:16:27.417 "raid_level": "raid1", 00:16:27.417 "superblock": false, 00:16:27.417 "num_base_bdevs": 4, 00:16:27.417 "num_base_bdevs_discovered": 3, 00:16:27.417 "num_base_bdevs_operational": 3, 00:16:27.417 "base_bdevs_list": [ 00:16:27.417 { 00:16:27.417 "name": null, 00:16:27.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.417 "is_configured": false, 00:16:27.417 "data_offset": 0, 00:16:27.417 "data_size": 65536 00:16:27.417 }, 00:16:27.417 { 00:16:27.417 "name": "BaseBdev2", 00:16:27.417 "uuid": "96437022-3f1a-4143-b403-81ed39492aac", 00:16:27.417 "is_configured": true, 00:16:27.417 "data_offset": 0, 00:16:27.417 "data_size": 65536 00:16:27.417 }, 00:16:27.417 { 00:16:27.417 "name": "BaseBdev3", 00:16:27.417 "uuid": "920ba7fe-61aa-471d-9dd4-9d17fbc3923c", 00:16:27.417 "is_configured": true, 00:16:27.417 "data_offset": 0, 00:16:27.417 "data_size": 65536 00:16:27.417 }, 00:16:27.417 { 
00:16:27.417 "name": "BaseBdev4", 00:16:27.417 "uuid": "2336b70d-aae9-414a-9df0-6ebdfdb0cb7c", 00:16:27.417 "is_configured": true, 00:16:27.417 "data_offset": 0, 00:16:27.417 "data_size": 65536 00:16:27.417 } 00:16:27.417 ] 00:16:27.417 }' 00:16:27.417 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.418 12:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.676 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:27.676 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:27.676 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.676 12:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.677 12:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.677 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:27.935 12:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.935 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:27.935 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:27.935 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:27.935 12:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.935 12:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.935 [2024-11-25 12:15:23.806193] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:27.935 12:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.935 
12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:27.935 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:27.935 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.935 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:27.935 12:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.935 12:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.935 12:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.935 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:27.935 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:27.935 12:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:27.935 12:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.935 12:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.935 [2024-11-25 12:15:23.943220] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:28.273 12:15:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:28.273 [2024-11-25 12:15:24.086990] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:16:28.273 [2024-11-25 12:15:24.087264] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:28.273 [2024-11-25 12:15:24.171507] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:28.273 [2024-11-25 12:15:24.171764] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:28.273 [2024-11-25 12:15:24.171943] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']'
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:28.273 BaseBdev2
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:28.273 [
00:16:28.273 {
00:16:28.273 "name": "BaseBdev2",
00:16:28.273 "aliases": [
00:16:28.273 "9efae11e-a256-4ff3-b404-41e5d0450de1"
00:16:28.273 ],
00:16:28.273 "product_name": "Malloc disk",
00:16:28.273 "block_size": 512,
00:16:28.273 "num_blocks": 65536,
00:16:28.273 "uuid": "9efae11e-a256-4ff3-b404-41e5d0450de1",
00:16:28.273 "assigned_rate_limits": {
00:16:28.273 "rw_ios_per_sec": 0,
00:16:28.273 "rw_mbytes_per_sec": 0,
00:16:28.273 "r_mbytes_per_sec": 0,
00:16:28.273 "w_mbytes_per_sec": 0
00:16:28.273 },
00:16:28.273 "claimed": false,
00:16:28.273 "zoned": false,
00:16:28.273 "supported_io_types": {
00:16:28.273 "read": true,
00:16:28.273 "write": true,
00:16:28.273 "unmap": true,
00:16:28.273 "flush": true,
00:16:28.273 "reset": true,
00:16:28.273 "nvme_admin": false,
00:16:28.273 "nvme_io": false,
00:16:28.273 "nvme_io_md": false,
00:16:28.273 "write_zeroes": true,
00:16:28.273 "zcopy": true,
00:16:28.273 "get_zone_info": false,
00:16:28.273 "zone_management": false,
00:16:28.273 "zone_append": false,
00:16:28.273 "compare": false,
00:16:28.273 "compare_and_write": false,
00:16:28.273 "abort": true,
00:16:28.273 "seek_hole": false,
00:16:28.273 "seek_data": false,
00:16:28.273 "copy": true,
00:16:28.273 "nvme_iov_md": false
00:16:28.273 },
00:16:28.273 "memory_domains": [
00:16:28.273 {
00:16:28.273 "dma_device_id": "system",
00:16:28.273 "dma_device_type": 1
00:16:28.273 },
00:16:28.273 {
00:16:28.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:28.273 "dma_device_type": 2
00:16:28.273 }
00:16:28.273 ],
00:16:28.273 "driver_specific": {}
00:16:28.273 }
00:16:28.273 ]
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:28.273 BaseBdev3
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:28.273 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:28.273 [
00:16:28.273 {
00:16:28.273 "name": "BaseBdev3",
00:16:28.273 "aliases": [
00:16:28.273 "34c9dd1d-48db-4586-a602-a4b3144f31eb"
00:16:28.273 ],
00:16:28.273 "product_name": "Malloc disk",
00:16:28.273 "block_size": 512,
00:16:28.273 "num_blocks": 65536,
00:16:28.273 "uuid": "34c9dd1d-48db-4586-a602-a4b3144f31eb",
00:16:28.273 "assigned_rate_limits": {
00:16:28.273 "rw_ios_per_sec": 0,
00:16:28.273 "rw_mbytes_per_sec": 0,
00:16:28.273 "r_mbytes_per_sec": 0,
00:16:28.273 "w_mbytes_per_sec": 0
00:16:28.273 },
00:16:28.273 "claimed": false,
00:16:28.273 "zoned": false,
00:16:28.273 "supported_io_types": {
00:16:28.273 "read": true,
00:16:28.273 "write": true,
00:16:28.273 "unmap": true,
00:16:28.273 "flush": true,
00:16:28.273 "reset": true,
00:16:28.274 "nvme_admin": false,
00:16:28.274 "nvme_io": false,
00:16:28.274 "nvme_io_md": false,
00:16:28.274 "write_zeroes": true,
00:16:28.274 "zcopy": true,
00:16:28.274 "get_zone_info": false,
00:16:28.274 "zone_management": false,
00:16:28.274 "zone_append": false,
00:16:28.274 "compare": false,
00:16:28.533 "compare_and_write": false,
00:16:28.533 "abort": true,
00:16:28.533 "seek_hole": false,
00:16:28.533 "seek_data": false,
00:16:28.533 "copy": true,
00:16:28.533 "nvme_iov_md": false
00:16:28.533 },
00:16:28.533 "memory_domains": [
00:16:28.533 {
00:16:28.533 "dma_device_id": "system",
00:16:28.533 "dma_device_type": 1
00:16:28.533 },
00:16:28.533 {
00:16:28.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:28.533 "dma_device_type": 2
00:16:28.533 }
00:16:28.533 ],
00:16:28.533 "driver_specific": {}
00:16:28.533 }
00:16:28.533 ]
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:28.533 BaseBdev4
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:28.533 [
00:16:28.533 {
00:16:28.533 "name": "BaseBdev4",
00:16:28.533 "aliases": [
00:16:28.533 "265c7911-e892-4dc5-8640-8a77c3831ba4"
00:16:28.533 ],
00:16:28.533 "product_name": "Malloc disk",
00:16:28.533 "block_size": 512,
00:16:28.533 "num_blocks": 65536,
00:16:28.533 "uuid": "265c7911-e892-4dc5-8640-8a77c3831ba4",
00:16:28.533 "assigned_rate_limits": {
00:16:28.533 "rw_ios_per_sec": 0,
00:16:28.533 "rw_mbytes_per_sec": 0,
00:16:28.533 "r_mbytes_per_sec": 0,
00:16:28.533 "w_mbytes_per_sec": 0
00:16:28.533 },
00:16:28.533 "claimed": false,
00:16:28.533 "zoned": false,
00:16:28.533 "supported_io_types": {
00:16:28.533 "read": true,
00:16:28.533 "write": true,
00:16:28.533 "unmap": true,
00:16:28.533 "flush": true,
00:16:28.533 "reset": true,
00:16:28.533 "nvme_admin": false,
00:16:28.533 "nvme_io": false,
00:16:28.533 "nvme_io_md": false,
00:16:28.533 "write_zeroes": true,
00:16:28.533 "zcopy": true,
00:16:28.533 "get_zone_info": false,
00:16:28.533 "zone_management": false,
00:16:28.533 "zone_append": false,
00:16:28.533 "compare": false,
00:16:28.533 "compare_and_write": false,
00:16:28.533 "abort": true,
00:16:28.533 "seek_hole": false,
00:16:28.533 "seek_data": false,
00:16:28.533 "copy": true,
00:16:28.533 "nvme_iov_md": false
00:16:28.533 },
00:16:28.533 "memory_domains": [
00:16:28.533 {
00:16:28.533 "dma_device_id": "system",
00:16:28.533 "dma_device_type": 1
00:16:28.533 },
00:16:28.533 {
00:16:28.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:28.533 "dma_device_type": 2
00:16:28.533 }
00:16:28.533 ],
00:16:28.533 "driver_specific": {}
00:16:28.533 }
00:16:28.533 ]
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:28.533 [2024-11-25 12:15:24.451277] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:16:28.533 [2024-11-25 12:15:24.451494] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:16:28.533 [2024-11-25 12:15:24.451644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:16:28.533 [2024-11-25 12:15:24.454098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:16:28.533 [2024-11-25 12:15:24.454298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:28.533 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:28.533 "name": "Existed_Raid",
00:16:28.533 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:28.533 "strip_size_kb": 0,
00:16:28.534 "state": "configuring",
00:16:28.534 "raid_level": "raid1",
00:16:28.534 "superblock": false,
00:16:28.534 "num_base_bdevs": 4,
00:16:28.534 "num_base_bdevs_discovered": 3,
00:16:28.534 "num_base_bdevs_operational": 4,
00:16:28.534 "base_bdevs_list": [
00:16:28.534 {
00:16:28.534 "name": "BaseBdev1",
00:16:28.534 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:28.534 "is_configured": false,
00:16:28.534 "data_offset": 0,
00:16:28.534 "data_size": 0
00:16:28.534 },
00:16:28.534 {
00:16:28.534 "name": "BaseBdev2",
00:16:28.534 "uuid": "9efae11e-a256-4ff3-b404-41e5d0450de1",
00:16:28.534 "is_configured": true,
00:16:28.534 "data_offset": 0,
00:16:28.534 "data_size": 65536
00:16:28.534 },
00:16:28.534 {
00:16:28.534 "name": "BaseBdev3",
00:16:28.534 "uuid": "34c9dd1d-48db-4586-a602-a4b3144f31eb",
00:16:28.534 "is_configured": true,
00:16:28.534 "data_offset": 0,
00:16:28.534 "data_size": 65536
00:16:28.534 },
00:16:28.534 {
00:16:28.534 "name": "BaseBdev4",
00:16:28.534 "uuid": "265c7911-e892-4dc5-8640-8a77c3831ba4",
00:16:28.534 "is_configured": true,
00:16:28.534 "data_offset": 0,
00:16:28.534 "data_size": 65536
00:16:28.534 }
00:16:28.534 ]
00:16:28.534 }'
00:16:28.534 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:28.534 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:29.101 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:16:29.101 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:29.101 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:29.101 [2024-11-25 12:15:24.947451] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:16:29.101 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:29.101 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:16:29.101 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:29.101 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:29.101 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:29.101 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:29.101 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:29.101 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:29.101 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:29.101 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:29.101 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:29.101 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:29.101 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:29.101 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:29.101 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:29.101 12:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:29.101 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:29.101 "name": "Existed_Raid",
00:16:29.101 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:29.101 "strip_size_kb": 0,
00:16:29.101 "state": "configuring",
00:16:29.102 "raid_level": "raid1",
00:16:29.102 "superblock": false,
00:16:29.102 "num_base_bdevs": 4,
00:16:29.102 "num_base_bdevs_discovered": 2,
00:16:29.102 "num_base_bdevs_operational": 4,
00:16:29.102 "base_bdevs_list": [
00:16:29.102 {
00:16:29.102 "name": "BaseBdev1",
00:16:29.102 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:29.102 "is_configured": false,
00:16:29.102 "data_offset": 0,
00:16:29.102 "data_size": 0
00:16:29.102 },
00:16:29.102 {
00:16:29.102 "name": null,
00:16:29.102 "uuid": "9efae11e-a256-4ff3-b404-41e5d0450de1",
00:16:29.102 "is_configured": false,
00:16:29.102 "data_offset": 0,
00:16:29.102 "data_size": 65536
00:16:29.102 },
00:16:29.102 {
00:16:29.102 "name": "BaseBdev3",
00:16:29.102 "uuid": "34c9dd1d-48db-4586-a602-a4b3144f31eb",
00:16:29.102 "is_configured": true,
00:16:29.102 "data_offset": 0,
00:16:29.102 "data_size": 65536
00:16:29.102 },
00:16:29.102 {
00:16:29.102 "name": "BaseBdev4",
00:16:29.102 "uuid": "265c7911-e892-4dc5-8640-8a77c3831ba4",
00:16:29.102 "is_configured": true,
00:16:29.102 "data_offset": 0,
00:16:29.102 "data_size": 65536
00:16:29.102 }
00:16:29.102 ]
00:16:29.102 }'
00:16:29.102 12:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:29.102 12:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:29.360 12:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:29.360 12:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:29.360 12:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:16:29.360 12:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:29.360 12:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:29.619 12:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:16:29.619 12:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:16:29.619 12:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:29.619 12:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:29.619 [2024-11-25 12:15:25.514118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:29.619 BaseBdev1
00:16:29.619 12:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:29.619 12:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:16:29.620 12:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:16:29.620 12:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:16:29.620 12:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:16:29.620 12:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:16:29.620 12:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:16:29.620 12:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:16:29.620 12:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:29.620 12:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:29.620 12:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:29.620 12:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:16:29.620 12:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:29.620 12:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:29.620 [
00:16:29.620 {
00:16:29.620 "name": "BaseBdev1",
00:16:29.620 "aliases": [
00:16:29.620 "3fdb4a86-33e0-4c7d-85ad-91f9bb5fa6a3"
00:16:29.620 ],
00:16:29.620 "product_name": "Malloc disk",
00:16:29.620 "block_size": 512,
00:16:29.620 "num_blocks": 65536,
00:16:29.620 "uuid": "3fdb4a86-33e0-4c7d-85ad-91f9bb5fa6a3",
00:16:29.620 "assigned_rate_limits": {
00:16:29.620 "rw_ios_per_sec": 0,
00:16:29.620 "rw_mbytes_per_sec": 0,
00:16:29.620 "r_mbytes_per_sec": 0,
00:16:29.620 "w_mbytes_per_sec": 0
00:16:29.620 },
00:16:29.620 "claimed": true,
00:16:29.620 "claim_type": "exclusive_write",
00:16:29.620 "zoned": false,
00:16:29.620 "supported_io_types": {
00:16:29.620 "read": true,
00:16:29.620 "write": true,
00:16:29.620 "unmap": true,
00:16:29.620 "flush": true,
00:16:29.620 "reset": true,
00:16:29.620 "nvme_admin": false,
00:16:29.620 "nvme_io": false,
00:16:29.620 "nvme_io_md": false,
00:16:29.620 "write_zeroes": true,
00:16:29.620 "zcopy": true,
00:16:29.620 "get_zone_info": false,
00:16:29.620 "zone_management": false,
00:16:29.620 "zone_append": false,
00:16:29.620 "compare": false,
00:16:29.620 "compare_and_write": false,
00:16:29.620 "abort": true,
00:16:29.620 "seek_hole": false,
00:16:29.620 "seek_data": false,
00:16:29.620 "copy": true,
00:16:29.620 "nvme_iov_md": false
00:16:29.620 },
00:16:29.620 "memory_domains": [
00:16:29.620 {
00:16:29.620 "dma_device_id": "system",
00:16:29.620 "dma_device_type": 1
00:16:29.620 },
00:16:29.620 {
00:16:29.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:29.620 "dma_device_type": 2
00:16:29.620 }
00:16:29.620 ],
00:16:29.620 "driver_specific": {}
00:16:29.620 }
00:16:29.620 ]
00:16:29.620 12:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:29.620 12:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:16:29.620 12:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:16:29.620 12:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:29.620 12:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:29.620 12:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:29.620 12:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:29.620 12:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:29.620 12:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:29.620 12:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:29.620 12:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:29.620 12:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:29.620 12:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:29.620 12:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:29.620 12:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:29.620 12:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:29.620 12:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:29.620 12:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:29.620 "name": "Existed_Raid",
00:16:29.620 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:29.620 "strip_size_kb": 0,
00:16:29.620 "state": "configuring",
00:16:29.620 "raid_level": "raid1",
00:16:29.620 "superblock": false,
00:16:29.620 "num_base_bdevs": 4,
00:16:29.620 "num_base_bdevs_discovered": 3,
00:16:29.620 "num_base_bdevs_operational": 4,
00:16:29.620 "base_bdevs_list": [
00:16:29.620 {
00:16:29.620 "name": "BaseBdev1",
00:16:29.620 "uuid": "3fdb4a86-33e0-4c7d-85ad-91f9bb5fa6a3",
00:16:29.620 "is_configured": true,
00:16:29.620 "data_offset": 0,
00:16:29.620 "data_size": 65536
00:16:29.620 },
00:16:29.620 {
00:16:29.620 "name": null,
00:16:29.620 "uuid": "9efae11e-a256-4ff3-b404-41e5d0450de1",
00:16:29.620 "is_configured": false,
00:16:29.620 "data_offset": 0,
00:16:29.620 "data_size": 65536
00:16:29.620 },
00:16:29.620 {
00:16:29.620 "name": "BaseBdev3",
00:16:29.620 "uuid": "34c9dd1d-48db-4586-a602-a4b3144f31eb",
00:16:29.620 "is_configured": true,
00:16:29.620 "data_offset": 0,
00:16:29.620 "data_size": 65536
00:16:29.620 },
00:16:29.620 {
00:16:29.620 "name": "BaseBdev4",
00:16:29.620 "uuid": "265c7911-e892-4dc5-8640-8a77c3831ba4",
00:16:29.620 "is_configured": true,
00:16:29.620 "data_offset": 0,
00:16:29.620 "data_size": 65536
00:16:29.620 }
00:16:29.620 ]
00:16:29.620 }'
00:16:29.620 12:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:29.620 12:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:30.187 12:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:30.187 12:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:16:30.187 12:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:30.187 12:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:30.187 12:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:30.187 12:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:16:30.187 12:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:16:30.187 12:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:30.187 12:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:30.187 [2024-11-25 12:15:26.090404] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:16:30.187 12:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:30.187 12:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:16:30.187 12:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:30.187 12:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:30.187 12:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:30.187 12:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:30.187 12:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:30.187 12:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:30.187 12:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:30.187 12:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:30.187 12:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:30.187 12:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:30.187 12:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:30.187 12:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:30.187 12:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:30.187 12:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:30.187 12:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:30.187 "name": "Existed_Raid",
00:16:30.187 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:30.187 "strip_size_kb": 0,
00:16:30.187 "state": "configuring",
00:16:30.187 "raid_level": "raid1",
00:16:30.187 "superblock": false,
00:16:30.187 "num_base_bdevs": 4,
00:16:30.187 "num_base_bdevs_discovered": 2,
00:16:30.187 "num_base_bdevs_operational": 4,
00:16:30.187 "base_bdevs_list": [
00:16:30.187 {
00:16:30.187 "name": "BaseBdev1",
00:16:30.187 "uuid": "3fdb4a86-33e0-4c7d-85ad-91f9bb5fa6a3",
00:16:30.187 "is_configured": true,
00:16:30.187 "data_offset": 0,
00:16:30.187 "data_size": 65536
00:16:30.187 },
00:16:30.187 {
00:16:30.187 "name": null,
00:16:30.187 "uuid": "9efae11e-a256-4ff3-b404-41e5d0450de1",
00:16:30.187 "is_configured": false,
00:16:30.187 "data_offset": 0,
00:16:30.187 "data_size": 65536
00:16:30.187 },
00:16:30.187 {
00:16:30.187 "name": null,
00:16:30.187 "uuid": "34c9dd1d-48db-4586-a602-a4b3144f31eb",
00:16:30.187 "is_configured": false,
00:16:30.187 "data_offset": 0,
00:16:30.187 "data_size": 65536
00:16:30.187 },
00:16:30.187 {
00:16:30.187 "name": "BaseBdev4",
00:16:30.187 "uuid": "265c7911-e892-4dc5-8640-8a77c3831ba4",
00:16:30.187 "is_configured": true,
00:16:30.187 "data_offset": 0,
00:16:30.187 "data_size": 65536
00:16:30.187 }
00:16:30.187 ]
00:16:30.187 }'
00:16:30.187 12:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:30.187 12:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:30.755 12:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:30.755 12:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:30.755 12:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:16:30.755 12:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:30.755 12:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:30.755 12:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:16:30.755 12:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:16:30.755 12:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:30.755 12:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:30.755 [2024-11-25 12:15:26.650525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:16:30.755 12:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:30.755 12:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:16:30.755 12:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:30.755 12:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:30.755 12:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:30.755 12:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:30.755 12:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:30.755 12:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:30.755 12:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:30.755 12:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:30.755 12:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:30.755 12:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:30.755 12:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:30.755 12:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:30.755 12:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:30.755 12:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:30.755 12:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:30.755 "name": "Existed_Raid",
00:16:30.755 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:30.755 "strip_size_kb": 0,
00:16:30.755 "state": "configuring",
00:16:30.755 "raid_level": "raid1",
00:16:30.755 "superblock": false,
00:16:30.755 "num_base_bdevs": 4,
00:16:30.755 "num_base_bdevs_discovered": 3,
00:16:30.755 "num_base_bdevs_operational": 4,
00:16:30.755 "base_bdevs_list": [
00:16:30.755 {
00:16:30.755 "name": "BaseBdev1",
00:16:30.755 "uuid": "3fdb4a86-33e0-4c7d-85ad-91f9bb5fa6a3",
00:16:30.755 "is_configured": true,
00:16:30.755 "data_offset": 0,
00:16:30.755 "data_size": 65536
00:16:30.755 },
00:16:30.755 {
00:16:30.755 "name": null,
00:16:30.755 "uuid": "9efae11e-a256-4ff3-b404-41e5d0450de1",
00:16:30.755 "is_configured": false,
00:16:30.755 "data_offset": 0,
00:16:30.755 "data_size": 65536
00:16:30.755 },
00:16:30.755 {
00:16:30.755 "name": "BaseBdev3", 00:16:30.755 "uuid": "34c9dd1d-48db-4586-a602-a4b3144f31eb", 00:16:30.755 "is_configured": true, 00:16:30.755 "data_offset": 0, 00:16:30.755 "data_size": 65536 00:16:30.755 }, 00:16:30.755 { 00:16:30.755 "name": "BaseBdev4", 00:16:30.755 "uuid": "265c7911-e892-4dc5-8640-8a77c3831ba4", 00:16:30.755 "is_configured": true, 00:16:30.755 "data_offset": 0, 00:16:30.755 "data_size": 65536 00:16:30.755 } 00:16:30.755 ] 00:16:30.755 }' 00:16:30.755 12:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.755 12:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.322 12:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.322 12:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.322 12:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.322 12:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:31.322 12:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.322 12:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:31.322 12:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:31.322 12:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.322 12:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.322 [2024-11-25 12:15:27.202759] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:31.322 12:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.322 12:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:31.322 12:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.322 12:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:31.322 12:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:31.322 12:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:31.322 12:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:31.322 12:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.322 12:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.322 12:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.322 12:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.322 12:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.322 12:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.322 12:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.322 12:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.322 12:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.322 12:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.322 "name": "Existed_Raid", 00:16:31.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.322 "strip_size_kb": 0, 00:16:31.322 "state": "configuring", 00:16:31.322 "raid_level": "raid1", 00:16:31.322 "superblock": false, 00:16:31.322 
"num_base_bdevs": 4, 00:16:31.322 "num_base_bdevs_discovered": 2, 00:16:31.322 "num_base_bdevs_operational": 4, 00:16:31.322 "base_bdevs_list": [ 00:16:31.322 { 00:16:31.322 "name": null, 00:16:31.322 "uuid": "3fdb4a86-33e0-4c7d-85ad-91f9bb5fa6a3", 00:16:31.322 "is_configured": false, 00:16:31.322 "data_offset": 0, 00:16:31.322 "data_size": 65536 00:16:31.322 }, 00:16:31.322 { 00:16:31.322 "name": null, 00:16:31.322 "uuid": "9efae11e-a256-4ff3-b404-41e5d0450de1", 00:16:31.322 "is_configured": false, 00:16:31.322 "data_offset": 0, 00:16:31.322 "data_size": 65536 00:16:31.322 }, 00:16:31.322 { 00:16:31.322 "name": "BaseBdev3", 00:16:31.322 "uuid": "34c9dd1d-48db-4586-a602-a4b3144f31eb", 00:16:31.322 "is_configured": true, 00:16:31.322 "data_offset": 0, 00:16:31.322 "data_size": 65536 00:16:31.322 }, 00:16:31.322 { 00:16:31.322 "name": "BaseBdev4", 00:16:31.322 "uuid": "265c7911-e892-4dc5-8640-8a77c3831ba4", 00:16:31.322 "is_configured": true, 00:16:31.322 "data_offset": 0, 00:16:31.322 "data_size": 65536 00:16:31.322 } 00:16:31.322 ] 00:16:31.322 }' 00:16:31.322 12:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.322 12:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.889 12:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:31.889 12:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.889 12:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.889 12:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.889 12:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.889 12:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:31.889 12:15:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:31.889 12:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.889 12:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.889 [2024-11-25 12:15:27.854539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:31.889 12:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.889 12:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:31.889 12:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.889 12:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:31.889 12:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:31.889 12:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:31.889 12:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:31.889 12:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.889 12:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.889 12:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.889 12:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.889 12:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.889 12:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.889 12:15:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.889 12:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.889 12:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.889 12:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.889 "name": "Existed_Raid", 00:16:31.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.889 "strip_size_kb": 0, 00:16:31.889 "state": "configuring", 00:16:31.889 "raid_level": "raid1", 00:16:31.889 "superblock": false, 00:16:31.889 "num_base_bdevs": 4, 00:16:31.889 "num_base_bdevs_discovered": 3, 00:16:31.889 "num_base_bdevs_operational": 4, 00:16:31.889 "base_bdevs_list": [ 00:16:31.889 { 00:16:31.889 "name": null, 00:16:31.889 "uuid": "3fdb4a86-33e0-4c7d-85ad-91f9bb5fa6a3", 00:16:31.889 "is_configured": false, 00:16:31.889 "data_offset": 0, 00:16:31.889 "data_size": 65536 00:16:31.889 }, 00:16:31.889 { 00:16:31.889 "name": "BaseBdev2", 00:16:31.889 "uuid": "9efae11e-a256-4ff3-b404-41e5d0450de1", 00:16:31.889 "is_configured": true, 00:16:31.889 "data_offset": 0, 00:16:31.889 "data_size": 65536 00:16:31.889 }, 00:16:31.889 { 00:16:31.889 "name": "BaseBdev3", 00:16:31.889 "uuid": "34c9dd1d-48db-4586-a602-a4b3144f31eb", 00:16:31.889 "is_configured": true, 00:16:31.889 "data_offset": 0, 00:16:31.889 "data_size": 65536 00:16:31.889 }, 00:16:31.889 { 00:16:31.889 "name": "BaseBdev4", 00:16:31.889 "uuid": "265c7911-e892-4dc5-8640-8a77c3831ba4", 00:16:31.889 "is_configured": true, 00:16:31.889 "data_offset": 0, 00:16:31.889 "data_size": 65536 00:16:31.889 } 00:16:31.889 ] 00:16:31.889 }' 00:16:31.889 12:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.889 12:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.457 12:15:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.457 12:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.457 12:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.457 12:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:32.457 12:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.457 12:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:32.457 12:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.457 12:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.457 12:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.457 12:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:32.457 12:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.457 12:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3fdb4a86-33e0-4c7d-85ad-91f9bb5fa6a3 00:16:32.457 12:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.457 12:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.457 [2024-11-25 12:15:28.524487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:32.457 [2024-11-25 12:15:28.524540] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:32.457 [2024-11-25 12:15:28.524557] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:32.457 
[2024-11-25 12:15:28.524894] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:32.457 [2024-11-25 12:15:28.525103] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:32.457 [2024-11-25 12:15:28.525119] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:32.457 [2024-11-25 12:15:28.525433] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:32.457 NewBaseBdev 00:16:32.457 12:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.457 12:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:32.457 12:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:32.457 12:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:32.457 12:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:32.457 12:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:32.457 12:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:32.457 12:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:32.457 12:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.457 12:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.457 12:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.457 12:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:32.457 12:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:32.457 12:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.715 [ 00:16:32.715 { 00:16:32.715 "name": "NewBaseBdev", 00:16:32.715 "aliases": [ 00:16:32.715 "3fdb4a86-33e0-4c7d-85ad-91f9bb5fa6a3" 00:16:32.715 ], 00:16:32.715 "product_name": "Malloc disk", 00:16:32.715 "block_size": 512, 00:16:32.715 "num_blocks": 65536, 00:16:32.715 "uuid": "3fdb4a86-33e0-4c7d-85ad-91f9bb5fa6a3", 00:16:32.715 "assigned_rate_limits": { 00:16:32.715 "rw_ios_per_sec": 0, 00:16:32.715 "rw_mbytes_per_sec": 0, 00:16:32.715 "r_mbytes_per_sec": 0, 00:16:32.715 "w_mbytes_per_sec": 0 00:16:32.715 }, 00:16:32.715 "claimed": true, 00:16:32.715 "claim_type": "exclusive_write", 00:16:32.715 "zoned": false, 00:16:32.715 "supported_io_types": { 00:16:32.715 "read": true, 00:16:32.715 "write": true, 00:16:32.715 "unmap": true, 00:16:32.715 "flush": true, 00:16:32.715 "reset": true, 00:16:32.715 "nvme_admin": false, 00:16:32.715 "nvme_io": false, 00:16:32.715 "nvme_io_md": false, 00:16:32.715 "write_zeroes": true, 00:16:32.715 "zcopy": true, 00:16:32.715 "get_zone_info": false, 00:16:32.715 "zone_management": false, 00:16:32.715 "zone_append": false, 00:16:32.715 "compare": false, 00:16:32.715 "compare_and_write": false, 00:16:32.715 "abort": true, 00:16:32.715 "seek_hole": false, 00:16:32.715 "seek_data": false, 00:16:32.715 "copy": true, 00:16:32.715 "nvme_iov_md": false 00:16:32.715 }, 00:16:32.715 "memory_domains": [ 00:16:32.715 { 00:16:32.715 "dma_device_id": "system", 00:16:32.715 "dma_device_type": 1 00:16:32.715 }, 00:16:32.715 { 00:16:32.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.715 "dma_device_type": 2 00:16:32.715 } 00:16:32.715 ], 00:16:32.715 "driver_specific": {} 00:16:32.715 } 00:16:32.715 ] 00:16:32.715 12:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.715 12:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:16:32.715 12:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:16:32.715 12:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:32.715 12:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.715 12:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:32.715 12:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:32.715 12:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:32.715 12:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.715 12:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.715 12:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.715 12:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.715 12:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.715 12:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.715 12:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.715 12:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.715 12:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.715 12:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.715 "name": "Existed_Raid", 00:16:32.715 "uuid": "f411c6b1-40f3-46b9-9f97-64e7f15e1ccd", 00:16:32.715 "strip_size_kb": 0, 00:16:32.715 "state": "online", 00:16:32.715 
"raid_level": "raid1", 00:16:32.715 "superblock": false, 00:16:32.715 "num_base_bdevs": 4, 00:16:32.716 "num_base_bdevs_discovered": 4, 00:16:32.716 "num_base_bdevs_operational": 4, 00:16:32.716 "base_bdevs_list": [ 00:16:32.716 { 00:16:32.716 "name": "NewBaseBdev", 00:16:32.716 "uuid": "3fdb4a86-33e0-4c7d-85ad-91f9bb5fa6a3", 00:16:32.716 "is_configured": true, 00:16:32.716 "data_offset": 0, 00:16:32.716 "data_size": 65536 00:16:32.716 }, 00:16:32.716 { 00:16:32.716 "name": "BaseBdev2", 00:16:32.716 "uuid": "9efae11e-a256-4ff3-b404-41e5d0450de1", 00:16:32.716 "is_configured": true, 00:16:32.716 "data_offset": 0, 00:16:32.716 "data_size": 65536 00:16:32.716 }, 00:16:32.716 { 00:16:32.716 "name": "BaseBdev3", 00:16:32.716 "uuid": "34c9dd1d-48db-4586-a602-a4b3144f31eb", 00:16:32.716 "is_configured": true, 00:16:32.716 "data_offset": 0, 00:16:32.716 "data_size": 65536 00:16:32.716 }, 00:16:32.716 { 00:16:32.716 "name": "BaseBdev4", 00:16:32.716 "uuid": "265c7911-e892-4dc5-8640-8a77c3831ba4", 00:16:32.716 "is_configured": true, 00:16:32.716 "data_offset": 0, 00:16:32.716 "data_size": 65536 00:16:32.716 } 00:16:32.716 ] 00:16:32.716 }' 00:16:32.716 12:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.716 12:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.974 12:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:32.974 12:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:32.974 12:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:32.974 12:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:32.974 12:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:32.974 12:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:16:32.974 12:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:32.974 12:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.974 12:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.974 12:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:32.974 [2024-11-25 12:15:29.049099] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:33.261 12:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.261 12:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:33.261 "name": "Existed_Raid", 00:16:33.261 "aliases": [ 00:16:33.261 "f411c6b1-40f3-46b9-9f97-64e7f15e1ccd" 00:16:33.261 ], 00:16:33.261 "product_name": "Raid Volume", 00:16:33.261 "block_size": 512, 00:16:33.261 "num_blocks": 65536, 00:16:33.261 "uuid": "f411c6b1-40f3-46b9-9f97-64e7f15e1ccd", 00:16:33.262 "assigned_rate_limits": { 00:16:33.262 "rw_ios_per_sec": 0, 00:16:33.262 "rw_mbytes_per_sec": 0, 00:16:33.262 "r_mbytes_per_sec": 0, 00:16:33.262 "w_mbytes_per_sec": 0 00:16:33.262 }, 00:16:33.262 "claimed": false, 00:16:33.262 "zoned": false, 00:16:33.262 "supported_io_types": { 00:16:33.262 "read": true, 00:16:33.262 "write": true, 00:16:33.262 "unmap": false, 00:16:33.262 "flush": false, 00:16:33.262 "reset": true, 00:16:33.262 "nvme_admin": false, 00:16:33.262 "nvme_io": false, 00:16:33.262 "nvme_io_md": false, 00:16:33.262 "write_zeroes": true, 00:16:33.262 "zcopy": false, 00:16:33.262 "get_zone_info": false, 00:16:33.262 "zone_management": false, 00:16:33.262 "zone_append": false, 00:16:33.262 "compare": false, 00:16:33.262 "compare_and_write": false, 00:16:33.262 "abort": false, 00:16:33.262 "seek_hole": false, 00:16:33.262 "seek_data": false, 00:16:33.262 
"copy": false, 00:16:33.262 "nvme_iov_md": false 00:16:33.262 }, 00:16:33.262 "memory_domains": [ 00:16:33.262 { 00:16:33.262 "dma_device_id": "system", 00:16:33.262 "dma_device_type": 1 00:16:33.262 }, 00:16:33.262 { 00:16:33.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.262 "dma_device_type": 2 00:16:33.262 }, 00:16:33.262 { 00:16:33.262 "dma_device_id": "system", 00:16:33.262 "dma_device_type": 1 00:16:33.262 }, 00:16:33.262 { 00:16:33.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.262 "dma_device_type": 2 00:16:33.262 }, 00:16:33.262 { 00:16:33.262 "dma_device_id": "system", 00:16:33.262 "dma_device_type": 1 00:16:33.262 }, 00:16:33.262 { 00:16:33.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.262 "dma_device_type": 2 00:16:33.262 }, 00:16:33.262 { 00:16:33.262 "dma_device_id": "system", 00:16:33.263 "dma_device_type": 1 00:16:33.263 }, 00:16:33.263 { 00:16:33.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.263 "dma_device_type": 2 00:16:33.263 } 00:16:33.263 ], 00:16:33.263 "driver_specific": { 00:16:33.263 "raid": { 00:16:33.263 "uuid": "f411c6b1-40f3-46b9-9f97-64e7f15e1ccd", 00:16:33.263 "strip_size_kb": 0, 00:16:33.263 "state": "online", 00:16:33.263 "raid_level": "raid1", 00:16:33.263 "superblock": false, 00:16:33.263 "num_base_bdevs": 4, 00:16:33.263 "num_base_bdevs_discovered": 4, 00:16:33.263 "num_base_bdevs_operational": 4, 00:16:33.263 "base_bdevs_list": [ 00:16:33.263 { 00:16:33.263 "name": "NewBaseBdev", 00:16:33.263 "uuid": "3fdb4a86-33e0-4c7d-85ad-91f9bb5fa6a3", 00:16:33.263 "is_configured": true, 00:16:33.263 "data_offset": 0, 00:16:33.263 "data_size": 65536 00:16:33.263 }, 00:16:33.263 { 00:16:33.263 "name": "BaseBdev2", 00:16:33.263 "uuid": "9efae11e-a256-4ff3-b404-41e5d0450de1", 00:16:33.263 "is_configured": true, 00:16:33.263 "data_offset": 0, 00:16:33.263 "data_size": 65536 00:16:33.263 }, 00:16:33.263 { 00:16:33.263 "name": "BaseBdev3", 00:16:33.263 "uuid": "34c9dd1d-48db-4586-a602-a4b3144f31eb", 00:16:33.263 
"is_configured": true, 00:16:33.263 "data_offset": 0, 00:16:33.263 "data_size": 65536 00:16:33.263 }, 00:16:33.263 { 00:16:33.263 "name": "BaseBdev4", 00:16:33.263 "uuid": "265c7911-e892-4dc5-8640-8a77c3831ba4", 00:16:33.263 "is_configured": true, 00:16:33.263 "data_offset": 0, 00:16:33.263 "data_size": 65536 00:16:33.263 } 00:16:33.263 ] 00:16:33.263 } 00:16:33.263 } 00:16:33.263 }' 00:16:33.263 12:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:33.263 12:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:33.263 BaseBdev2 00:16:33.263 BaseBdev3 00:16:33.263 BaseBdev4' 00:16:33.264 12:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.264 12:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:33.264 12:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:33.264 12:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:33.264 12:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.264 12:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.264 12:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.264 12:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.264 12:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:33.264 12:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:33.264 12:15:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:33.264 12:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:33.264 12:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.264 12:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.264 12:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.264 12:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.264 12:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:33.264 12:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:33.264 12:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:33.264 12:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:33.264 12:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.266 12:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.266 12:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.266 12:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.536 12:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:33.536 12:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:33.536 12:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:33.536 12:15:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.536 12:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:33.536 12:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.536 12:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.536 12:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.536 12:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:33.536 12:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:33.536 12:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:33.536 12:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.536 12:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.536 [2024-11-25 12:15:29.408749] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:33.536 [2024-11-25 12:15:29.408783] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:33.536 [2024-11-25 12:15:29.408879] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:33.536 [2024-11-25 12:15:29.409243] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:33.536 [2024-11-25 12:15:29.409266] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:33.536 12:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.536 12:15:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73319 00:16:33.536 12:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73319 ']' 00:16:33.536 12:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73319 00:16:33.536 12:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:16:33.536 12:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:33.536 12:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73319 00:16:33.536 killing process with pid 73319 00:16:33.537 12:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:33.537 12:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:33.537 12:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73319' 00:16:33.537 12:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73319 00:16:33.537 [2024-11-25 12:15:29.442145] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:33.537 12:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73319 00:16:33.795 [2024-11-25 12:15:29.782259] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:34.731 12:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:34.731 00:16:34.731 real 0m12.554s 00:16:34.731 user 0m20.875s 00:16:34.731 sys 0m1.662s 00:16:34.731 12:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:34.731 ************************************ 00:16:34.731 END TEST raid_state_function_test 00:16:34.731 ************************************ 00:16:34.731 12:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:16:34.990 12:15:30 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:16:34.990 12:15:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:34.990 12:15:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:34.990 12:15:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:34.990 ************************************ 00:16:34.990 START TEST raid_state_function_test_sb 00:16:34.990 ************************************ 00:16:34.990 12:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:16:34.990 12:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:34.990 12:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:34.990 12:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:34.990 12:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:34.990 12:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:34.990 12:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:34.990 12:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:34.990 12:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:34.990 12:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:34.990 12:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:34.990 12:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:34.990 12:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:34.990 
12:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:34.990 12:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:34.990 12:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:34.990 12:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:34.990 12:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:34.990 12:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:34.990 12:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:34.990 12:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:34.990 12:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:34.990 12:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:34.990 12:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:34.990 12:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:34.990 12:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:34.990 12:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:34.990 12:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:34.990 12:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:34.990 Process raid pid: 73996 00:16:34.990 12:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73996 00:16:34.990 12:15:30 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73996' 00:16:34.990 12:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73996 00:16:34.990 12:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:34.990 12:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73996 ']' 00:16:34.990 12:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.990 12:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:34.990 12:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.991 12:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:34.991 12:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.991 [2024-11-25 12:15:30.986843] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:16:34.991 [2024-11-25 12:15:30.987230] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:35.249 [2024-11-25 12:15:31.165599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.249 [2024-11-25 12:15:31.292549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.507 [2024-11-25 12:15:31.499125] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:35.507 [2024-11-25 12:15:31.499393] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:36.172 12:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:36.172 12:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:36.172 12:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:36.172 12:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.172 12:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.172 [2024-11-25 12:15:31.945625] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:36.172 [2024-11-25 12:15:31.945689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:36.172 [2024-11-25 12:15:31.945707] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:36.172 [2024-11-25 12:15:31.945724] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:36.172 [2024-11-25 12:15:31.945735] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:16:36.172 [2024-11-25 12:15:31.945749] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:36.172 [2024-11-25 12:15:31.945759] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:36.172 [2024-11-25 12:15:31.945774] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:36.172 12:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.173 12:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:36.173 12:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:36.173 12:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:36.173 12:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:36.173 12:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:36.173 12:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:36.173 12:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.173 12:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.173 12:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.173 12:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.173 12:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.173 12:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.173 12:15:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.173 12:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.173 12:15:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.173 12:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.173 "name": "Existed_Raid", 00:16:36.173 "uuid": "dc5ab8ae-7f72-47dd-af87-0ec6b1269ea2", 00:16:36.173 "strip_size_kb": 0, 00:16:36.173 "state": "configuring", 00:16:36.173 "raid_level": "raid1", 00:16:36.173 "superblock": true, 00:16:36.173 "num_base_bdevs": 4, 00:16:36.173 "num_base_bdevs_discovered": 0, 00:16:36.173 "num_base_bdevs_operational": 4, 00:16:36.173 "base_bdevs_list": [ 00:16:36.173 { 00:16:36.173 "name": "BaseBdev1", 00:16:36.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.173 "is_configured": false, 00:16:36.173 "data_offset": 0, 00:16:36.173 "data_size": 0 00:16:36.173 }, 00:16:36.173 { 00:16:36.173 "name": "BaseBdev2", 00:16:36.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.173 "is_configured": false, 00:16:36.173 "data_offset": 0, 00:16:36.173 "data_size": 0 00:16:36.173 }, 00:16:36.173 { 00:16:36.173 "name": "BaseBdev3", 00:16:36.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.173 "is_configured": false, 00:16:36.173 "data_offset": 0, 00:16:36.173 "data_size": 0 00:16:36.173 }, 00:16:36.173 { 00:16:36.173 "name": "BaseBdev4", 00:16:36.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.173 "is_configured": false, 00:16:36.173 "data_offset": 0, 00:16:36.173 "data_size": 0 00:16:36.173 } 00:16:36.173 ] 00:16:36.173 }' 00:16:36.173 12:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.173 12:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.432 12:15:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:36.432 12:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.432 12:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.432 [2024-11-25 12:15:32.461693] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:36.432 [2024-11-25 12:15:32.461745] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:36.432 12:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.432 12:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:36.432 12:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.432 12:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.432 [2024-11-25 12:15:32.469686] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:36.432 [2024-11-25 12:15:32.469882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:36.432 [2024-11-25 12:15:32.469910] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:36.432 [2024-11-25 12:15:32.469929] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:36.432 [2024-11-25 12:15:32.469939] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:36.432 [2024-11-25 12:15:32.469954] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:36.432 [2024-11-25 12:15:32.469964] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:16:36.432 [2024-11-25 12:15:32.469978] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:36.433 12:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.433 12:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:36.433 12:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.433 12:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.433 [2024-11-25 12:15:32.514519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:36.433 BaseBdev1 00:16:36.433 12:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.433 12:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:36.433 12:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:36.433 12:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:36.433 12:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:36.433 12:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:36.433 12:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:36.433 12:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:36.433 12:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.433 12:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.692 12:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:36.692 12:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:36.692 12:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.692 12:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.692 [ 00:16:36.692 { 00:16:36.692 "name": "BaseBdev1", 00:16:36.692 "aliases": [ 00:16:36.692 "c5e4c877-7eb5-442c-b95d-79de58001dd1" 00:16:36.692 ], 00:16:36.692 "product_name": "Malloc disk", 00:16:36.692 "block_size": 512, 00:16:36.692 "num_blocks": 65536, 00:16:36.692 "uuid": "c5e4c877-7eb5-442c-b95d-79de58001dd1", 00:16:36.692 "assigned_rate_limits": { 00:16:36.692 "rw_ios_per_sec": 0, 00:16:36.692 "rw_mbytes_per_sec": 0, 00:16:36.692 "r_mbytes_per_sec": 0, 00:16:36.692 "w_mbytes_per_sec": 0 00:16:36.692 }, 00:16:36.692 "claimed": true, 00:16:36.692 "claim_type": "exclusive_write", 00:16:36.692 "zoned": false, 00:16:36.692 "supported_io_types": { 00:16:36.692 "read": true, 00:16:36.692 "write": true, 00:16:36.692 "unmap": true, 00:16:36.692 "flush": true, 00:16:36.692 "reset": true, 00:16:36.692 "nvme_admin": false, 00:16:36.692 "nvme_io": false, 00:16:36.692 "nvme_io_md": false, 00:16:36.692 "write_zeroes": true, 00:16:36.692 "zcopy": true, 00:16:36.692 "get_zone_info": false, 00:16:36.692 "zone_management": false, 00:16:36.692 "zone_append": false, 00:16:36.692 "compare": false, 00:16:36.692 "compare_and_write": false, 00:16:36.692 "abort": true, 00:16:36.692 "seek_hole": false, 00:16:36.692 "seek_data": false, 00:16:36.692 "copy": true, 00:16:36.692 "nvme_iov_md": false 00:16:36.692 }, 00:16:36.692 "memory_domains": [ 00:16:36.692 { 00:16:36.692 "dma_device_id": "system", 00:16:36.692 "dma_device_type": 1 00:16:36.692 }, 00:16:36.692 { 00:16:36.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.692 "dma_device_type": 2 00:16:36.692 } 00:16:36.692 ], 00:16:36.692 "driver_specific": {} 
00:16:36.692 } 00:16:36.692 ] 00:16:36.692 12:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.692 12:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:36.692 12:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:36.692 12:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:36.692 12:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:36.692 12:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:36.692 12:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:36.692 12:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:36.692 12:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.692 12:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.692 12:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.692 12:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.692 12:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.692 12:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.692 12:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.692 12:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.692 12:15:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.692 12:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.692 "name": "Existed_Raid", 00:16:36.692 "uuid": "2ef397ae-c020-4e82-83f8-0553c2d7c8bf", 00:16:36.692 "strip_size_kb": 0, 00:16:36.692 "state": "configuring", 00:16:36.692 "raid_level": "raid1", 00:16:36.692 "superblock": true, 00:16:36.692 "num_base_bdevs": 4, 00:16:36.692 "num_base_bdevs_discovered": 1, 00:16:36.692 "num_base_bdevs_operational": 4, 00:16:36.692 "base_bdevs_list": [ 00:16:36.692 { 00:16:36.692 "name": "BaseBdev1", 00:16:36.692 "uuid": "c5e4c877-7eb5-442c-b95d-79de58001dd1", 00:16:36.692 "is_configured": true, 00:16:36.692 "data_offset": 2048, 00:16:36.692 "data_size": 63488 00:16:36.692 }, 00:16:36.692 { 00:16:36.692 "name": "BaseBdev2", 00:16:36.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.692 "is_configured": false, 00:16:36.692 "data_offset": 0, 00:16:36.692 "data_size": 0 00:16:36.692 }, 00:16:36.692 { 00:16:36.692 "name": "BaseBdev3", 00:16:36.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.692 "is_configured": false, 00:16:36.692 "data_offset": 0, 00:16:36.692 "data_size": 0 00:16:36.692 }, 00:16:36.692 { 00:16:36.692 "name": "BaseBdev4", 00:16:36.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.692 "is_configured": false, 00:16:36.692 "data_offset": 0, 00:16:36.692 "data_size": 0 00:16:36.692 } 00:16:36.692 ] 00:16:36.692 }' 00:16:36.692 12:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.692 12:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.260 12:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:37.260 12:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.260 12:15:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:37.260 [2024-11-25 12:15:33.054720] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:37.260 [2024-11-25 12:15:33.054926] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:37.260 12:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.260 12:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:37.260 12:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.260 12:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.261 [2024-11-25 12:15:33.062776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:37.261 [2024-11-25 12:15:33.065136] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:37.261 [2024-11-25 12:15:33.065186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:37.261 [2024-11-25 12:15:33.065203] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:37.261 [2024-11-25 12:15:33.065220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:37.261 [2024-11-25 12:15:33.065231] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:37.261 [2024-11-25 12:15:33.065246] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:37.261 12:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.261 12:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:37.261 12:15:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:37.261 12:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:37.261 12:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:37.261 12:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:37.261 12:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:37.261 12:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:37.261 12:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:37.261 12:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.261 12:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.261 12:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.261 12:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.261 12:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.261 12:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.261 12:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.261 12:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.261 12:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.261 12:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.261 "name": 
"Existed_Raid", 00:16:37.261 "uuid": "d5673d90-56c0-46fb-8615-8ce89ee9ecb2", 00:16:37.261 "strip_size_kb": 0, 00:16:37.261 "state": "configuring", 00:16:37.261 "raid_level": "raid1", 00:16:37.261 "superblock": true, 00:16:37.261 "num_base_bdevs": 4, 00:16:37.261 "num_base_bdevs_discovered": 1, 00:16:37.261 "num_base_bdevs_operational": 4, 00:16:37.261 "base_bdevs_list": [ 00:16:37.261 { 00:16:37.261 "name": "BaseBdev1", 00:16:37.261 "uuid": "c5e4c877-7eb5-442c-b95d-79de58001dd1", 00:16:37.261 "is_configured": true, 00:16:37.261 "data_offset": 2048, 00:16:37.261 "data_size": 63488 00:16:37.261 }, 00:16:37.261 { 00:16:37.261 "name": "BaseBdev2", 00:16:37.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.261 "is_configured": false, 00:16:37.261 "data_offset": 0, 00:16:37.261 "data_size": 0 00:16:37.261 }, 00:16:37.261 { 00:16:37.261 "name": "BaseBdev3", 00:16:37.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.261 "is_configured": false, 00:16:37.261 "data_offset": 0, 00:16:37.261 "data_size": 0 00:16:37.261 }, 00:16:37.261 { 00:16:37.261 "name": "BaseBdev4", 00:16:37.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.261 "is_configured": false, 00:16:37.261 "data_offset": 0, 00:16:37.261 "data_size": 0 00:16:37.261 } 00:16:37.261 ] 00:16:37.261 }' 00:16:37.261 12:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.261 12:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.520 12:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:37.520 12:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.520 12:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.779 [2024-11-25 12:15:33.625166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:37.779 
BaseBdev2 00:16:37.779 12:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.779 12:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:37.779 12:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:37.779 12:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:37.779 12:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:37.779 12:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:37.779 12:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:37.779 12:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:37.779 12:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.779 12:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.779 12:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.779 12:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:37.779 12:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.779 12:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.779 [ 00:16:37.779 { 00:16:37.779 "name": "BaseBdev2", 00:16:37.779 "aliases": [ 00:16:37.779 "8d08b9b7-1f8a-47ae-b34a-8a60dc4b5ac4" 00:16:37.779 ], 00:16:37.779 "product_name": "Malloc disk", 00:16:37.779 "block_size": 512, 00:16:37.779 "num_blocks": 65536, 00:16:37.779 "uuid": "8d08b9b7-1f8a-47ae-b34a-8a60dc4b5ac4", 00:16:37.779 "assigned_rate_limits": { 
00:16:37.779 "rw_ios_per_sec": 0, 00:16:37.779 "rw_mbytes_per_sec": 0, 00:16:37.779 "r_mbytes_per_sec": 0, 00:16:37.779 "w_mbytes_per_sec": 0 00:16:37.779 }, 00:16:37.779 "claimed": true, 00:16:37.779 "claim_type": "exclusive_write", 00:16:37.779 "zoned": false, 00:16:37.779 "supported_io_types": { 00:16:37.779 "read": true, 00:16:37.779 "write": true, 00:16:37.779 "unmap": true, 00:16:37.779 "flush": true, 00:16:37.779 "reset": true, 00:16:37.779 "nvme_admin": false, 00:16:37.779 "nvme_io": false, 00:16:37.779 "nvme_io_md": false, 00:16:37.779 "write_zeroes": true, 00:16:37.779 "zcopy": true, 00:16:37.779 "get_zone_info": false, 00:16:37.779 "zone_management": false, 00:16:37.779 "zone_append": false, 00:16:37.779 "compare": false, 00:16:37.779 "compare_and_write": false, 00:16:37.779 "abort": true, 00:16:37.779 "seek_hole": false, 00:16:37.779 "seek_data": false, 00:16:37.779 "copy": true, 00:16:37.779 "nvme_iov_md": false 00:16:37.779 }, 00:16:37.779 "memory_domains": [ 00:16:37.779 { 00:16:37.779 "dma_device_id": "system", 00:16:37.779 "dma_device_type": 1 00:16:37.779 }, 00:16:37.779 { 00:16:37.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.779 "dma_device_type": 2 00:16:37.779 } 00:16:37.779 ], 00:16:37.779 "driver_specific": {} 00:16:37.779 } 00:16:37.779 ] 00:16:37.779 12:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.779 12:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:37.779 12:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:37.779 12:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:37.779 12:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:37.779 12:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:16:37.779 12:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:37.779 12:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:37.779 12:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:37.779 12:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:37.779 12:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.779 12:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.779 12:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.779 12:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.779 12:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.779 12:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.779 12:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.779 12:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.779 12:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.779 12:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.779 "name": "Existed_Raid", 00:16:37.779 "uuid": "d5673d90-56c0-46fb-8615-8ce89ee9ecb2", 00:16:37.779 "strip_size_kb": 0, 00:16:37.779 "state": "configuring", 00:16:37.779 "raid_level": "raid1", 00:16:37.779 "superblock": true, 00:16:37.779 "num_base_bdevs": 4, 00:16:37.779 "num_base_bdevs_discovered": 2, 00:16:37.779 "num_base_bdevs_operational": 4, 00:16:37.779 
"base_bdevs_list": [ 00:16:37.780 { 00:16:37.780 "name": "BaseBdev1", 00:16:37.780 "uuid": "c5e4c877-7eb5-442c-b95d-79de58001dd1", 00:16:37.780 "is_configured": true, 00:16:37.780 "data_offset": 2048, 00:16:37.780 "data_size": 63488 00:16:37.780 }, 00:16:37.780 { 00:16:37.780 "name": "BaseBdev2", 00:16:37.780 "uuid": "8d08b9b7-1f8a-47ae-b34a-8a60dc4b5ac4", 00:16:37.780 "is_configured": true, 00:16:37.780 "data_offset": 2048, 00:16:37.780 "data_size": 63488 00:16:37.780 }, 00:16:37.780 { 00:16:37.780 "name": "BaseBdev3", 00:16:37.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.780 "is_configured": false, 00:16:37.780 "data_offset": 0, 00:16:37.780 "data_size": 0 00:16:37.780 }, 00:16:37.780 { 00:16:37.780 "name": "BaseBdev4", 00:16:37.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.780 "is_configured": false, 00:16:37.780 "data_offset": 0, 00:16:37.780 "data_size": 0 00:16:37.780 } 00:16:37.780 ] 00:16:37.780 }' 00:16:37.780 12:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.780 12:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.347 12:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:38.347 12:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.347 12:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.347 [2024-11-25 12:15:34.205722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:38.347 BaseBdev3 00:16:38.347 12:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.347 12:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:38.347 12:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:16:38.347 12:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:38.347 12:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:38.347 12:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:38.347 12:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:38.347 12:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:38.347 12:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.347 12:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.347 12:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.347 12:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:38.347 12:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.347 12:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.347 [ 00:16:38.347 { 00:16:38.347 "name": "BaseBdev3", 00:16:38.347 "aliases": [ 00:16:38.347 "71c9de3c-134d-474d-b4d8-310de0e09c2d" 00:16:38.347 ], 00:16:38.347 "product_name": "Malloc disk", 00:16:38.347 "block_size": 512, 00:16:38.347 "num_blocks": 65536, 00:16:38.347 "uuid": "71c9de3c-134d-474d-b4d8-310de0e09c2d", 00:16:38.347 "assigned_rate_limits": { 00:16:38.347 "rw_ios_per_sec": 0, 00:16:38.347 "rw_mbytes_per_sec": 0, 00:16:38.347 "r_mbytes_per_sec": 0, 00:16:38.347 "w_mbytes_per_sec": 0 00:16:38.347 }, 00:16:38.347 "claimed": true, 00:16:38.347 "claim_type": "exclusive_write", 00:16:38.347 "zoned": false, 00:16:38.347 "supported_io_types": { 00:16:38.347 "read": true, 00:16:38.347 
"write": true, 00:16:38.347 "unmap": true, 00:16:38.347 "flush": true, 00:16:38.347 "reset": true, 00:16:38.347 "nvme_admin": false, 00:16:38.347 "nvme_io": false, 00:16:38.347 "nvme_io_md": false, 00:16:38.347 "write_zeroes": true, 00:16:38.347 "zcopy": true, 00:16:38.347 "get_zone_info": false, 00:16:38.347 "zone_management": false, 00:16:38.347 "zone_append": false, 00:16:38.347 "compare": false, 00:16:38.347 "compare_and_write": false, 00:16:38.347 "abort": true, 00:16:38.347 "seek_hole": false, 00:16:38.347 "seek_data": false, 00:16:38.347 "copy": true, 00:16:38.347 "nvme_iov_md": false 00:16:38.347 }, 00:16:38.347 "memory_domains": [ 00:16:38.347 { 00:16:38.347 "dma_device_id": "system", 00:16:38.347 "dma_device_type": 1 00:16:38.347 }, 00:16:38.347 { 00:16:38.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.347 "dma_device_type": 2 00:16:38.347 } 00:16:38.347 ], 00:16:38.347 "driver_specific": {} 00:16:38.347 } 00:16:38.347 ] 00:16:38.347 12:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.347 12:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:38.347 12:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:38.347 12:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:38.347 12:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:38.347 12:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:38.347 12:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:38.347 12:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:38.347 12:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:16:38.347 12:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:38.347 12:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.347 12:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.347 12:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.347 12:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.347 12:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.347 12:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.347 12:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.347 12:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.347 12:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.347 12:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.347 "name": "Existed_Raid", 00:16:38.347 "uuid": "d5673d90-56c0-46fb-8615-8ce89ee9ecb2", 00:16:38.347 "strip_size_kb": 0, 00:16:38.347 "state": "configuring", 00:16:38.347 "raid_level": "raid1", 00:16:38.347 "superblock": true, 00:16:38.347 "num_base_bdevs": 4, 00:16:38.347 "num_base_bdevs_discovered": 3, 00:16:38.347 "num_base_bdevs_operational": 4, 00:16:38.347 "base_bdevs_list": [ 00:16:38.347 { 00:16:38.347 "name": "BaseBdev1", 00:16:38.347 "uuid": "c5e4c877-7eb5-442c-b95d-79de58001dd1", 00:16:38.347 "is_configured": true, 00:16:38.347 "data_offset": 2048, 00:16:38.347 "data_size": 63488 00:16:38.347 }, 00:16:38.347 { 00:16:38.347 "name": "BaseBdev2", 00:16:38.347 "uuid": 
"8d08b9b7-1f8a-47ae-b34a-8a60dc4b5ac4", 00:16:38.347 "is_configured": true, 00:16:38.347 "data_offset": 2048, 00:16:38.347 "data_size": 63488 00:16:38.347 }, 00:16:38.347 { 00:16:38.347 "name": "BaseBdev3", 00:16:38.347 "uuid": "71c9de3c-134d-474d-b4d8-310de0e09c2d", 00:16:38.347 "is_configured": true, 00:16:38.347 "data_offset": 2048, 00:16:38.347 "data_size": 63488 00:16:38.347 }, 00:16:38.347 { 00:16:38.347 "name": "BaseBdev4", 00:16:38.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.347 "is_configured": false, 00:16:38.347 "data_offset": 0, 00:16:38.347 "data_size": 0 00:16:38.347 } 00:16:38.347 ] 00:16:38.347 }' 00:16:38.347 12:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.347 12:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.915 12:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:38.916 12:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.916 12:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.916 [2024-11-25 12:15:34.828365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:38.916 [2024-11-25 12:15:34.828682] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:38.916 [2024-11-25 12:15:34.828708] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:38.916 BaseBdev4 00:16:38.916 [2024-11-25 12:15:34.829074] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:38.916 [2024-11-25 12:15:34.829288] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:38.916 [2024-11-25 12:15:34.829310] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:16:38.916 [2024-11-25 12:15:34.829513] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.916 12:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.916 12:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:38.916 12:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:38.916 12:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:38.916 12:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:38.916 12:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:38.916 12:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:38.916 12:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:38.916 12:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.916 12:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.916 12:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.916 12:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:38.916 12:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.916 12:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.916 [ 00:16:38.916 { 00:16:38.916 "name": "BaseBdev4", 00:16:38.916 "aliases": [ 00:16:38.916 "cb54962c-293d-4a3b-a5c2-d039382a5717" 00:16:38.916 ], 00:16:38.916 "product_name": "Malloc disk", 00:16:38.916 "block_size": 512, 00:16:38.916 
"num_blocks": 65536, 00:16:38.916 "uuid": "cb54962c-293d-4a3b-a5c2-d039382a5717", 00:16:38.916 "assigned_rate_limits": { 00:16:38.916 "rw_ios_per_sec": 0, 00:16:38.916 "rw_mbytes_per_sec": 0, 00:16:38.916 "r_mbytes_per_sec": 0, 00:16:38.916 "w_mbytes_per_sec": 0 00:16:38.916 }, 00:16:38.916 "claimed": true, 00:16:38.916 "claim_type": "exclusive_write", 00:16:38.916 "zoned": false, 00:16:38.916 "supported_io_types": { 00:16:38.916 "read": true, 00:16:38.916 "write": true, 00:16:38.916 "unmap": true, 00:16:38.916 "flush": true, 00:16:38.916 "reset": true, 00:16:38.916 "nvme_admin": false, 00:16:38.916 "nvme_io": false, 00:16:38.916 "nvme_io_md": false, 00:16:38.916 "write_zeroes": true, 00:16:38.916 "zcopy": true, 00:16:38.916 "get_zone_info": false, 00:16:38.916 "zone_management": false, 00:16:38.916 "zone_append": false, 00:16:38.916 "compare": false, 00:16:38.916 "compare_and_write": false, 00:16:38.916 "abort": true, 00:16:38.916 "seek_hole": false, 00:16:38.916 "seek_data": false, 00:16:38.916 "copy": true, 00:16:38.916 "nvme_iov_md": false 00:16:38.916 }, 00:16:38.916 "memory_domains": [ 00:16:38.916 { 00:16:38.916 "dma_device_id": "system", 00:16:38.916 "dma_device_type": 1 00:16:38.916 }, 00:16:38.916 { 00:16:38.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.916 "dma_device_type": 2 00:16:38.916 } 00:16:38.916 ], 00:16:38.916 "driver_specific": {} 00:16:38.916 } 00:16:38.916 ] 00:16:38.916 12:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.916 12:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:38.916 12:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:38.916 12:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:38.916 12:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:16:38.916 12:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:38.916 12:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.916 12:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:38.916 12:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:38.916 12:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:38.916 12:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.916 12:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.916 12:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.916 12:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.916 12:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.916 12:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.916 12:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.916 12:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.916 12:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.916 12:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.916 "name": "Existed_Raid", 00:16:38.916 "uuid": "d5673d90-56c0-46fb-8615-8ce89ee9ecb2", 00:16:38.916 "strip_size_kb": 0, 00:16:38.916 "state": "online", 00:16:38.916 "raid_level": "raid1", 00:16:38.916 "superblock": true, 00:16:38.916 "num_base_bdevs": 4, 
00:16:38.916 "num_base_bdevs_discovered": 4, 00:16:38.916 "num_base_bdevs_operational": 4, 00:16:38.916 "base_bdevs_list": [ 00:16:38.916 { 00:16:38.916 "name": "BaseBdev1", 00:16:38.916 "uuid": "c5e4c877-7eb5-442c-b95d-79de58001dd1", 00:16:38.916 "is_configured": true, 00:16:38.916 "data_offset": 2048, 00:16:38.916 "data_size": 63488 00:16:38.916 }, 00:16:38.916 { 00:16:38.916 "name": "BaseBdev2", 00:16:38.916 "uuid": "8d08b9b7-1f8a-47ae-b34a-8a60dc4b5ac4", 00:16:38.916 "is_configured": true, 00:16:38.916 "data_offset": 2048, 00:16:38.916 "data_size": 63488 00:16:38.917 }, 00:16:38.917 { 00:16:38.917 "name": "BaseBdev3", 00:16:38.917 "uuid": "71c9de3c-134d-474d-b4d8-310de0e09c2d", 00:16:38.917 "is_configured": true, 00:16:38.917 "data_offset": 2048, 00:16:38.917 "data_size": 63488 00:16:38.917 }, 00:16:38.917 { 00:16:38.917 "name": "BaseBdev4", 00:16:38.917 "uuid": "cb54962c-293d-4a3b-a5c2-d039382a5717", 00:16:38.917 "is_configured": true, 00:16:38.917 "data_offset": 2048, 00:16:38.917 "data_size": 63488 00:16:38.917 } 00:16:38.917 ] 00:16:38.917 }' 00:16:38.917 12:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.917 12:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.485 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:39.485 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:39.485 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:39.485 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:39.485 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:39.485 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:39.485 
12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:39.485 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:39.485 12:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.485 12:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.485 [2024-11-25 12:15:35.340979] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:39.485 12:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.485 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:39.485 "name": "Existed_Raid", 00:16:39.485 "aliases": [ 00:16:39.485 "d5673d90-56c0-46fb-8615-8ce89ee9ecb2" 00:16:39.485 ], 00:16:39.485 "product_name": "Raid Volume", 00:16:39.485 "block_size": 512, 00:16:39.485 "num_blocks": 63488, 00:16:39.485 "uuid": "d5673d90-56c0-46fb-8615-8ce89ee9ecb2", 00:16:39.485 "assigned_rate_limits": { 00:16:39.485 "rw_ios_per_sec": 0, 00:16:39.485 "rw_mbytes_per_sec": 0, 00:16:39.485 "r_mbytes_per_sec": 0, 00:16:39.485 "w_mbytes_per_sec": 0 00:16:39.485 }, 00:16:39.485 "claimed": false, 00:16:39.485 "zoned": false, 00:16:39.485 "supported_io_types": { 00:16:39.485 "read": true, 00:16:39.485 "write": true, 00:16:39.485 "unmap": false, 00:16:39.485 "flush": false, 00:16:39.485 "reset": true, 00:16:39.485 "nvme_admin": false, 00:16:39.485 "nvme_io": false, 00:16:39.485 "nvme_io_md": false, 00:16:39.485 "write_zeroes": true, 00:16:39.485 "zcopy": false, 00:16:39.485 "get_zone_info": false, 00:16:39.485 "zone_management": false, 00:16:39.485 "zone_append": false, 00:16:39.485 "compare": false, 00:16:39.485 "compare_and_write": false, 00:16:39.485 "abort": false, 00:16:39.485 "seek_hole": false, 00:16:39.485 "seek_data": false, 00:16:39.485 "copy": false, 00:16:39.485 
"nvme_iov_md": false 00:16:39.485 }, 00:16:39.485 "memory_domains": [ 00:16:39.485 { 00:16:39.485 "dma_device_id": "system", 00:16:39.485 "dma_device_type": 1 00:16:39.485 }, 00:16:39.485 { 00:16:39.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.485 "dma_device_type": 2 00:16:39.485 }, 00:16:39.485 { 00:16:39.485 "dma_device_id": "system", 00:16:39.485 "dma_device_type": 1 00:16:39.485 }, 00:16:39.485 { 00:16:39.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.485 "dma_device_type": 2 00:16:39.485 }, 00:16:39.485 { 00:16:39.485 "dma_device_id": "system", 00:16:39.485 "dma_device_type": 1 00:16:39.485 }, 00:16:39.485 { 00:16:39.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.485 "dma_device_type": 2 00:16:39.485 }, 00:16:39.485 { 00:16:39.485 "dma_device_id": "system", 00:16:39.485 "dma_device_type": 1 00:16:39.485 }, 00:16:39.485 { 00:16:39.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.485 "dma_device_type": 2 00:16:39.485 } 00:16:39.485 ], 00:16:39.485 "driver_specific": { 00:16:39.485 "raid": { 00:16:39.485 "uuid": "d5673d90-56c0-46fb-8615-8ce89ee9ecb2", 00:16:39.485 "strip_size_kb": 0, 00:16:39.485 "state": "online", 00:16:39.485 "raid_level": "raid1", 00:16:39.485 "superblock": true, 00:16:39.485 "num_base_bdevs": 4, 00:16:39.485 "num_base_bdevs_discovered": 4, 00:16:39.485 "num_base_bdevs_operational": 4, 00:16:39.485 "base_bdevs_list": [ 00:16:39.485 { 00:16:39.485 "name": "BaseBdev1", 00:16:39.485 "uuid": "c5e4c877-7eb5-442c-b95d-79de58001dd1", 00:16:39.485 "is_configured": true, 00:16:39.485 "data_offset": 2048, 00:16:39.485 "data_size": 63488 00:16:39.485 }, 00:16:39.485 { 00:16:39.485 "name": "BaseBdev2", 00:16:39.485 "uuid": "8d08b9b7-1f8a-47ae-b34a-8a60dc4b5ac4", 00:16:39.485 "is_configured": true, 00:16:39.485 "data_offset": 2048, 00:16:39.485 "data_size": 63488 00:16:39.485 }, 00:16:39.485 { 00:16:39.485 "name": "BaseBdev3", 00:16:39.485 "uuid": "71c9de3c-134d-474d-b4d8-310de0e09c2d", 00:16:39.485 "is_configured": true, 
00:16:39.486 "data_offset": 2048, 00:16:39.486 "data_size": 63488 00:16:39.486 }, 00:16:39.486 { 00:16:39.486 "name": "BaseBdev4", 00:16:39.486 "uuid": "cb54962c-293d-4a3b-a5c2-d039382a5717", 00:16:39.486 "is_configured": true, 00:16:39.486 "data_offset": 2048, 00:16:39.486 "data_size": 63488 00:16:39.486 } 00:16:39.486 ] 00:16:39.486 } 00:16:39.486 } 00:16:39.486 }' 00:16:39.486 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:39.486 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:39.486 BaseBdev2 00:16:39.486 BaseBdev3 00:16:39.486 BaseBdev4' 00:16:39.486 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.486 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:39.486 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:39.486 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:39.486 12:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.486 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.486 12:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.486 12:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.486 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:39.486 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:39.486 12:15:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:39.486 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:39.486 12:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.486 12:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.486 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.486 12:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.745 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:39.745 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:39.745 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:39.745 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:39.745 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.745 12:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.745 12:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.745 12:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.745 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:39.745 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:39.745 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:16:39.745 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.745 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:39.745 12:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.745 12:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.745 12:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.745 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:39.746 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:39.746 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:39.746 12:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.746 12:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.746 [2024-11-25 12:15:35.688689] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:39.746 12:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.746 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:39.746 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:39.746 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:39.746 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:39.746 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:39.746 12:15:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:39.746 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:39.746 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:39.746 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:39.746 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:39.746 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:39.746 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.746 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.746 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.746 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.746 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.746 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.746 12:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.746 12:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.746 12:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.005 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.005 "name": "Existed_Raid", 00:16:40.005 "uuid": "d5673d90-56c0-46fb-8615-8ce89ee9ecb2", 00:16:40.005 "strip_size_kb": 0, 00:16:40.005 
"state": "online", 00:16:40.005 "raid_level": "raid1", 00:16:40.005 "superblock": true, 00:16:40.005 "num_base_bdevs": 4, 00:16:40.005 "num_base_bdevs_discovered": 3, 00:16:40.005 "num_base_bdevs_operational": 3, 00:16:40.005 "base_bdevs_list": [ 00:16:40.005 { 00:16:40.005 "name": null, 00:16:40.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.005 "is_configured": false, 00:16:40.005 "data_offset": 0, 00:16:40.005 "data_size": 63488 00:16:40.005 }, 00:16:40.005 { 00:16:40.005 "name": "BaseBdev2", 00:16:40.005 "uuid": "8d08b9b7-1f8a-47ae-b34a-8a60dc4b5ac4", 00:16:40.005 "is_configured": true, 00:16:40.005 "data_offset": 2048, 00:16:40.005 "data_size": 63488 00:16:40.005 }, 00:16:40.005 { 00:16:40.005 "name": "BaseBdev3", 00:16:40.005 "uuid": "71c9de3c-134d-474d-b4d8-310de0e09c2d", 00:16:40.005 "is_configured": true, 00:16:40.005 "data_offset": 2048, 00:16:40.005 "data_size": 63488 00:16:40.005 }, 00:16:40.005 { 00:16:40.005 "name": "BaseBdev4", 00:16:40.005 "uuid": "cb54962c-293d-4a3b-a5c2-d039382a5717", 00:16:40.005 "is_configured": true, 00:16:40.005 "data_offset": 2048, 00:16:40.005 "data_size": 63488 00:16:40.005 } 00:16:40.005 ] 00:16:40.005 }' 00:16:40.005 12:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.005 12:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.268 12:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:40.268 12:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:40.268 12:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.268 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.268 12:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:40.268 12:15:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.268 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.526 12:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:40.526 12:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:40.526 12:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:40.526 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.526 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.526 [2024-11-25 12:15:36.380987] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:40.526 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.526 12:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:40.526 12:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:40.526 12:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.526 12:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:40.526 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.526 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.526 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.526 12:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:40.526 12:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:16:40.526 12:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:40.526 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.526 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.526 [2024-11-25 12:15:36.525244] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:40.526 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.526 12:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:40.526 12:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:40.527 12:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.527 12:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:40.527 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.787 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.787 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.787 12:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:40.787 12:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:40.787 12:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:40.787 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.787 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.787 [2024-11-25 12:15:36.665572] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:40.787 [2024-11-25 12:15:36.665706] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:40.787 [2024-11-25 12:15:36.748042] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:40.787 [2024-11-25 12:15:36.748120] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:40.787 [2024-11-25 12:15:36.748140] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:40.787 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.787 12:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:40.787 12:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:40.787 12:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:40.787 12:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.787 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.787 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.787 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.787 12:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:40.787 12:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:40.787 12:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:40.787 12:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:40.787 12:15:36 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:40.787 12:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:40.787 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.787 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.787 BaseBdev2 00:16:40.787 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.787 12:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:40.787 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:40.787 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:40.787 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:40.787 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:40.787 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:40.787 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:40.787 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.787 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.787 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.787 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:40.787 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.787 12:15:36 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:16:40.787 [ 00:16:40.787 { 00:16:40.787 "name": "BaseBdev2", 00:16:40.787 "aliases": [ 00:16:40.787 "aaf555e4-6c75-4f4f-a470-e8930c26f627" 00:16:40.787 ], 00:16:40.787 "product_name": "Malloc disk", 00:16:40.787 "block_size": 512, 00:16:40.787 "num_blocks": 65536, 00:16:40.787 "uuid": "aaf555e4-6c75-4f4f-a470-e8930c26f627", 00:16:40.787 "assigned_rate_limits": { 00:16:40.787 "rw_ios_per_sec": 0, 00:16:40.787 "rw_mbytes_per_sec": 0, 00:16:40.787 "r_mbytes_per_sec": 0, 00:16:40.787 "w_mbytes_per_sec": 0 00:16:40.787 }, 00:16:40.787 "claimed": false, 00:16:40.787 "zoned": false, 00:16:40.787 "supported_io_types": { 00:16:40.787 "read": true, 00:16:40.787 "write": true, 00:16:40.787 "unmap": true, 00:16:40.787 "flush": true, 00:16:40.787 "reset": true, 00:16:40.787 "nvme_admin": false, 00:16:40.787 "nvme_io": false, 00:16:40.787 "nvme_io_md": false, 00:16:40.787 "write_zeroes": true, 00:16:40.787 "zcopy": true, 00:16:40.787 "get_zone_info": false, 00:16:40.787 "zone_management": false, 00:16:40.787 "zone_append": false, 00:16:40.787 "compare": false, 00:16:40.787 "compare_and_write": false, 00:16:40.787 "abort": true, 00:16:40.787 "seek_hole": false, 00:16:40.787 "seek_data": false, 00:16:40.787 "copy": true, 00:16:40.787 "nvme_iov_md": false 00:16:40.787 }, 00:16:40.787 "memory_domains": [ 00:16:40.787 { 00:16:40.787 "dma_device_id": "system", 00:16:40.787 "dma_device_type": 1 00:16:40.787 }, 00:16:40.787 { 00:16:40.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.787 "dma_device_type": 2 00:16:40.787 } 00:16:40.787 ], 00:16:40.787 "driver_specific": {} 00:16:40.788 } 00:16:40.788 ] 00:16:40.788 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.788 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:40.788 12:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:40.788 12:15:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:40.788 12:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:40.788 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.788 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.049 BaseBdev3 00:16:41.049 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.049 12:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:41.049 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:41.049 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:41.049 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:41.049 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:41.049 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:41.049 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:41.049 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.049 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.049 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.049 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:41.049 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.049 12:15:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.049 [ 00:16:41.049 { 00:16:41.049 "name": "BaseBdev3", 00:16:41.049 "aliases": [ 00:16:41.049 "8179b713-f689-4c51-bfd2-345b2dc6f8b5" 00:16:41.049 ], 00:16:41.049 "product_name": "Malloc disk", 00:16:41.049 "block_size": 512, 00:16:41.049 "num_blocks": 65536, 00:16:41.049 "uuid": "8179b713-f689-4c51-bfd2-345b2dc6f8b5", 00:16:41.049 "assigned_rate_limits": { 00:16:41.049 "rw_ios_per_sec": 0, 00:16:41.049 "rw_mbytes_per_sec": 0, 00:16:41.049 "r_mbytes_per_sec": 0, 00:16:41.049 "w_mbytes_per_sec": 0 00:16:41.049 }, 00:16:41.049 "claimed": false, 00:16:41.049 "zoned": false, 00:16:41.049 "supported_io_types": { 00:16:41.049 "read": true, 00:16:41.049 "write": true, 00:16:41.049 "unmap": true, 00:16:41.049 "flush": true, 00:16:41.049 "reset": true, 00:16:41.049 "nvme_admin": false, 00:16:41.049 "nvme_io": false, 00:16:41.049 "nvme_io_md": false, 00:16:41.049 "write_zeroes": true, 00:16:41.049 "zcopy": true, 00:16:41.049 "get_zone_info": false, 00:16:41.049 "zone_management": false, 00:16:41.049 "zone_append": false, 00:16:41.049 "compare": false, 00:16:41.049 "compare_and_write": false, 00:16:41.049 "abort": true, 00:16:41.049 "seek_hole": false, 00:16:41.049 "seek_data": false, 00:16:41.049 "copy": true, 00:16:41.049 "nvme_iov_md": false 00:16:41.049 }, 00:16:41.049 "memory_domains": [ 00:16:41.049 { 00:16:41.049 "dma_device_id": "system", 00:16:41.049 "dma_device_type": 1 00:16:41.049 }, 00:16:41.049 { 00:16:41.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.049 "dma_device_type": 2 00:16:41.049 } 00:16:41.049 ], 00:16:41.049 "driver_specific": {} 00:16:41.049 } 00:16:41.049 ] 00:16:41.049 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.049 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:41.049 12:15:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:41.049 12:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:41.049 12:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:41.049 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.049 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.049 BaseBdev4 00:16:41.049 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.049 12:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:41.049 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:41.049 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:41.049 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:41.049 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:41.049 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:41.049 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:41.049 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.049 12:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.049 12:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.049 12:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:41.049 12:15:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.049 12:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.049 [ 00:16:41.049 { 00:16:41.049 "name": "BaseBdev4", 00:16:41.049 "aliases": [ 00:16:41.049 "4c3899b6-4bd8-4beb-8c8d-5050244f8721" 00:16:41.049 ], 00:16:41.049 "product_name": "Malloc disk", 00:16:41.049 "block_size": 512, 00:16:41.049 "num_blocks": 65536, 00:16:41.049 "uuid": "4c3899b6-4bd8-4beb-8c8d-5050244f8721", 00:16:41.049 "assigned_rate_limits": { 00:16:41.049 "rw_ios_per_sec": 0, 00:16:41.049 "rw_mbytes_per_sec": 0, 00:16:41.049 "r_mbytes_per_sec": 0, 00:16:41.049 "w_mbytes_per_sec": 0 00:16:41.049 }, 00:16:41.049 "claimed": false, 00:16:41.049 "zoned": false, 00:16:41.049 "supported_io_types": { 00:16:41.049 "read": true, 00:16:41.049 "write": true, 00:16:41.049 "unmap": true, 00:16:41.049 "flush": true, 00:16:41.049 "reset": true, 00:16:41.049 "nvme_admin": false, 00:16:41.049 "nvme_io": false, 00:16:41.049 "nvme_io_md": false, 00:16:41.049 "write_zeroes": true, 00:16:41.049 "zcopy": true, 00:16:41.049 "get_zone_info": false, 00:16:41.049 "zone_management": false, 00:16:41.049 "zone_append": false, 00:16:41.049 "compare": false, 00:16:41.049 "compare_and_write": false, 00:16:41.049 "abort": true, 00:16:41.049 "seek_hole": false, 00:16:41.049 "seek_data": false, 00:16:41.049 "copy": true, 00:16:41.049 "nvme_iov_md": false 00:16:41.049 }, 00:16:41.049 "memory_domains": [ 00:16:41.049 { 00:16:41.049 "dma_device_id": "system", 00:16:41.049 "dma_device_type": 1 00:16:41.049 }, 00:16:41.049 { 00:16:41.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.049 "dma_device_type": 2 00:16:41.049 } 00:16:41.049 ], 00:16:41.049 "driver_specific": {} 00:16:41.050 } 00:16:41.050 ] 00:16:41.050 12:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.050 12:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:16:41.050 12:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:41.050 12:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:41.050 12:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:41.050 12:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.050 12:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.050 [2024-11-25 12:15:37.031797] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:41.050 [2024-11-25 12:15:37.031867] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:41.050 [2024-11-25 12:15:37.031895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:41.050 [2024-11-25 12:15:37.034367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:41.050 [2024-11-25 12:15:37.034441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:41.050 12:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.050 12:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:41.050 12:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:41.050 12:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:41.050 12:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:41.050 12:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:16:41.050 12:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:41.050 12:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.050 12:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.050 12:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.050 12:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.050 12:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.050 12:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.050 12:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.050 12:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.050 12:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.050 12:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.050 "name": "Existed_Raid", 00:16:41.050 "uuid": "c70eea82-7258-42ee-8dcf-9e37625dae2c", 00:16:41.050 "strip_size_kb": 0, 00:16:41.050 "state": "configuring", 00:16:41.050 "raid_level": "raid1", 00:16:41.050 "superblock": true, 00:16:41.050 "num_base_bdevs": 4, 00:16:41.050 "num_base_bdevs_discovered": 3, 00:16:41.050 "num_base_bdevs_operational": 4, 00:16:41.050 "base_bdevs_list": [ 00:16:41.050 { 00:16:41.050 "name": "BaseBdev1", 00:16:41.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.050 "is_configured": false, 00:16:41.050 "data_offset": 0, 00:16:41.050 "data_size": 0 00:16:41.050 }, 00:16:41.050 { 00:16:41.050 "name": "BaseBdev2", 00:16:41.050 "uuid": "aaf555e4-6c75-4f4f-a470-e8930c26f627", 
00:16:41.050 "is_configured": true, 00:16:41.050 "data_offset": 2048, 00:16:41.050 "data_size": 63488 00:16:41.050 }, 00:16:41.050 { 00:16:41.050 "name": "BaseBdev3", 00:16:41.050 "uuid": "8179b713-f689-4c51-bfd2-345b2dc6f8b5", 00:16:41.050 "is_configured": true, 00:16:41.050 "data_offset": 2048, 00:16:41.050 "data_size": 63488 00:16:41.050 }, 00:16:41.050 { 00:16:41.050 "name": "BaseBdev4", 00:16:41.050 "uuid": "4c3899b6-4bd8-4beb-8c8d-5050244f8721", 00:16:41.050 "is_configured": true, 00:16:41.050 "data_offset": 2048, 00:16:41.050 "data_size": 63488 00:16:41.050 } 00:16:41.050 ] 00:16:41.050 }' 00:16:41.050 12:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.050 12:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.618 12:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:41.618 12:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.618 12:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.618 [2024-11-25 12:15:37.571993] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:41.618 12:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.618 12:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:41.618 12:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:41.618 12:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:41.618 12:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:41.618 12:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:16:41.618 12:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:41.618 12:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.618 12:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.618 12:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.618 12:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.618 12:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.618 12:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.618 12:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.618 12:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.618 12:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.618 12:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.618 "name": "Existed_Raid", 00:16:41.618 "uuid": "c70eea82-7258-42ee-8dcf-9e37625dae2c", 00:16:41.618 "strip_size_kb": 0, 00:16:41.618 "state": "configuring", 00:16:41.618 "raid_level": "raid1", 00:16:41.618 "superblock": true, 00:16:41.618 "num_base_bdevs": 4, 00:16:41.618 "num_base_bdevs_discovered": 2, 00:16:41.618 "num_base_bdevs_operational": 4, 00:16:41.618 "base_bdevs_list": [ 00:16:41.618 { 00:16:41.618 "name": "BaseBdev1", 00:16:41.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.618 "is_configured": false, 00:16:41.618 "data_offset": 0, 00:16:41.618 "data_size": 0 00:16:41.618 }, 00:16:41.618 { 00:16:41.618 "name": null, 00:16:41.618 "uuid": "aaf555e4-6c75-4f4f-a470-e8930c26f627", 00:16:41.618 
"is_configured": false, 00:16:41.618 "data_offset": 0, 00:16:41.618 "data_size": 63488 00:16:41.618 }, 00:16:41.618 { 00:16:41.618 "name": "BaseBdev3", 00:16:41.618 "uuid": "8179b713-f689-4c51-bfd2-345b2dc6f8b5", 00:16:41.618 "is_configured": true, 00:16:41.618 "data_offset": 2048, 00:16:41.618 "data_size": 63488 00:16:41.618 }, 00:16:41.618 { 00:16:41.618 "name": "BaseBdev4", 00:16:41.618 "uuid": "4c3899b6-4bd8-4beb-8c8d-5050244f8721", 00:16:41.618 "is_configured": true, 00:16:41.618 "data_offset": 2048, 00:16:41.618 "data_size": 63488 00:16:41.618 } 00:16:41.618 ] 00:16:41.618 }' 00:16:41.618 12:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.618 12:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.185 12:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:42.185 12:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.185 12:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.185 12:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.185 12:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.185 12:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:42.185 12:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:42.185 12:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.185 12:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.185 [2024-11-25 12:15:38.146524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:42.185 BaseBdev1 
00:16:42.185 12:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.185 12:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:42.185 12:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:42.185 12:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:42.185 12:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:42.185 12:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:42.185 12:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:42.185 12:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:42.185 12:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.185 12:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.185 12:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.185 12:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:42.185 12:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.185 12:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.185 [ 00:16:42.185 { 00:16:42.185 "name": "BaseBdev1", 00:16:42.185 "aliases": [ 00:16:42.185 "feafbf08-e670-41fb-832a-37e24ba305af" 00:16:42.185 ], 00:16:42.185 "product_name": "Malloc disk", 00:16:42.185 "block_size": 512, 00:16:42.185 "num_blocks": 65536, 00:16:42.185 "uuid": "feafbf08-e670-41fb-832a-37e24ba305af", 00:16:42.185 "assigned_rate_limits": { 00:16:42.185 
"rw_ios_per_sec": 0, 00:16:42.185 "rw_mbytes_per_sec": 0, 00:16:42.185 "r_mbytes_per_sec": 0, 00:16:42.185 "w_mbytes_per_sec": 0 00:16:42.185 }, 00:16:42.185 "claimed": true, 00:16:42.185 "claim_type": "exclusive_write", 00:16:42.185 "zoned": false, 00:16:42.185 "supported_io_types": { 00:16:42.185 "read": true, 00:16:42.185 "write": true, 00:16:42.185 "unmap": true, 00:16:42.185 "flush": true, 00:16:42.185 "reset": true, 00:16:42.185 "nvme_admin": false, 00:16:42.185 "nvme_io": false, 00:16:42.185 "nvme_io_md": false, 00:16:42.185 "write_zeroes": true, 00:16:42.185 "zcopy": true, 00:16:42.185 "get_zone_info": false, 00:16:42.185 "zone_management": false, 00:16:42.185 "zone_append": false, 00:16:42.185 "compare": false, 00:16:42.185 "compare_and_write": false, 00:16:42.186 "abort": true, 00:16:42.186 "seek_hole": false, 00:16:42.186 "seek_data": false, 00:16:42.186 "copy": true, 00:16:42.186 "nvme_iov_md": false 00:16:42.186 }, 00:16:42.186 "memory_domains": [ 00:16:42.186 { 00:16:42.186 "dma_device_id": "system", 00:16:42.186 "dma_device_type": 1 00:16:42.186 }, 00:16:42.186 { 00:16:42.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.186 "dma_device_type": 2 00:16:42.186 } 00:16:42.186 ], 00:16:42.186 "driver_specific": {} 00:16:42.186 } 00:16:42.186 ] 00:16:42.186 12:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.186 12:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:42.186 12:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:42.186 12:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:42.186 12:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:42.186 12:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:16:42.186 12:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:42.186 12:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:42.186 12:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.186 12:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.186 12:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.186 12:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.186 12:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.186 12:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.186 12:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.186 12:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.186 12:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.186 12:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.186 "name": "Existed_Raid", 00:16:42.186 "uuid": "c70eea82-7258-42ee-8dcf-9e37625dae2c", 00:16:42.186 "strip_size_kb": 0, 00:16:42.186 "state": "configuring", 00:16:42.186 "raid_level": "raid1", 00:16:42.186 "superblock": true, 00:16:42.186 "num_base_bdevs": 4, 00:16:42.186 "num_base_bdevs_discovered": 3, 00:16:42.186 "num_base_bdevs_operational": 4, 00:16:42.186 "base_bdevs_list": [ 00:16:42.186 { 00:16:42.186 "name": "BaseBdev1", 00:16:42.186 "uuid": "feafbf08-e670-41fb-832a-37e24ba305af", 00:16:42.186 "is_configured": true, 00:16:42.186 "data_offset": 2048, 00:16:42.186 "data_size": 63488 
00:16:42.186 }, 00:16:42.186 { 00:16:42.186 "name": null, 00:16:42.186 "uuid": "aaf555e4-6c75-4f4f-a470-e8930c26f627", 00:16:42.186 "is_configured": false, 00:16:42.186 "data_offset": 0, 00:16:42.186 "data_size": 63488 00:16:42.186 }, 00:16:42.186 { 00:16:42.186 "name": "BaseBdev3", 00:16:42.186 "uuid": "8179b713-f689-4c51-bfd2-345b2dc6f8b5", 00:16:42.186 "is_configured": true, 00:16:42.186 "data_offset": 2048, 00:16:42.186 "data_size": 63488 00:16:42.186 }, 00:16:42.186 { 00:16:42.186 "name": "BaseBdev4", 00:16:42.186 "uuid": "4c3899b6-4bd8-4beb-8c8d-5050244f8721", 00:16:42.186 "is_configured": true, 00:16:42.186 "data_offset": 2048, 00:16:42.186 "data_size": 63488 00:16:42.186 } 00:16:42.186 ] 00:16:42.186 }' 00:16:42.186 12:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.186 12:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.754 12:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:42.754 12:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.754 12:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.754 12:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.754 12:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.754 12:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:42.754 12:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:42.754 12:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.754 12:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.754 
[2024-11-25 12:15:38.742763] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:42.754 12:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.754 12:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:42.754 12:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:42.754 12:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:42.754 12:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:42.754 12:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:42.754 12:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:42.754 12:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.754 12:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.754 12:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.754 12:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.754 12:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.754 12:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.754 12:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.754 12:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.754 12:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.754 12:15:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.754 "name": "Existed_Raid", 00:16:42.754 "uuid": "c70eea82-7258-42ee-8dcf-9e37625dae2c", 00:16:42.754 "strip_size_kb": 0, 00:16:42.754 "state": "configuring", 00:16:42.754 "raid_level": "raid1", 00:16:42.754 "superblock": true, 00:16:42.754 "num_base_bdevs": 4, 00:16:42.754 "num_base_bdevs_discovered": 2, 00:16:42.754 "num_base_bdevs_operational": 4, 00:16:42.754 "base_bdevs_list": [ 00:16:42.754 { 00:16:42.754 "name": "BaseBdev1", 00:16:42.754 "uuid": "feafbf08-e670-41fb-832a-37e24ba305af", 00:16:42.754 "is_configured": true, 00:16:42.754 "data_offset": 2048, 00:16:42.754 "data_size": 63488 00:16:42.754 }, 00:16:42.754 { 00:16:42.754 "name": null, 00:16:42.754 "uuid": "aaf555e4-6c75-4f4f-a470-e8930c26f627", 00:16:42.754 "is_configured": false, 00:16:42.754 "data_offset": 0, 00:16:42.754 "data_size": 63488 00:16:42.754 }, 00:16:42.754 { 00:16:42.754 "name": null, 00:16:42.754 "uuid": "8179b713-f689-4c51-bfd2-345b2dc6f8b5", 00:16:42.754 "is_configured": false, 00:16:42.754 "data_offset": 0, 00:16:42.754 "data_size": 63488 00:16:42.754 }, 00:16:42.754 { 00:16:42.754 "name": "BaseBdev4", 00:16:42.754 "uuid": "4c3899b6-4bd8-4beb-8c8d-5050244f8721", 00:16:42.754 "is_configured": true, 00:16:42.754 "data_offset": 2048, 00:16:42.754 "data_size": 63488 00:16:42.754 } 00:16:42.754 ] 00:16:42.754 }' 00:16:42.754 12:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.754 12:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.324 12:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.324 12:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.324 12:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.324 12:15:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:43.324 12:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.324 12:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:43.324 12:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:43.324 12:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.324 12:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.324 [2024-11-25 12:15:39.310920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:43.324 12:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.324 12:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:43.324 12:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:43.324 12:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:43.324 12:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:43.324 12:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:43.324 12:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:43.324 12:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.324 12:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.324 12:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:43.324 12:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.324 12:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.324 12:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.324 12:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.324 12:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.324 12:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.324 12:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.324 "name": "Existed_Raid", 00:16:43.324 "uuid": "c70eea82-7258-42ee-8dcf-9e37625dae2c", 00:16:43.324 "strip_size_kb": 0, 00:16:43.324 "state": "configuring", 00:16:43.324 "raid_level": "raid1", 00:16:43.324 "superblock": true, 00:16:43.324 "num_base_bdevs": 4, 00:16:43.324 "num_base_bdevs_discovered": 3, 00:16:43.324 "num_base_bdevs_operational": 4, 00:16:43.324 "base_bdevs_list": [ 00:16:43.324 { 00:16:43.324 "name": "BaseBdev1", 00:16:43.324 "uuid": "feafbf08-e670-41fb-832a-37e24ba305af", 00:16:43.324 "is_configured": true, 00:16:43.325 "data_offset": 2048, 00:16:43.325 "data_size": 63488 00:16:43.325 }, 00:16:43.325 { 00:16:43.325 "name": null, 00:16:43.325 "uuid": "aaf555e4-6c75-4f4f-a470-e8930c26f627", 00:16:43.325 "is_configured": false, 00:16:43.325 "data_offset": 0, 00:16:43.325 "data_size": 63488 00:16:43.325 }, 00:16:43.325 { 00:16:43.325 "name": "BaseBdev3", 00:16:43.325 "uuid": "8179b713-f689-4c51-bfd2-345b2dc6f8b5", 00:16:43.325 "is_configured": true, 00:16:43.325 "data_offset": 2048, 00:16:43.325 "data_size": 63488 00:16:43.325 }, 00:16:43.325 { 00:16:43.325 "name": "BaseBdev4", 00:16:43.325 "uuid": 
"4c3899b6-4bd8-4beb-8c8d-5050244f8721", 00:16:43.325 "is_configured": true, 00:16:43.325 "data_offset": 2048, 00:16:43.325 "data_size": 63488 00:16:43.325 } 00:16:43.325 ] 00:16:43.325 }' 00:16:43.325 12:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.325 12:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.938 12:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:43.938 12:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.938 12:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.938 12:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.938 12:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.938 12:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:43.938 12:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:43.938 12:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.938 12:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.938 [2024-11-25 12:15:39.879151] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:43.938 12:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.938 12:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:43.938 12:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:43.938 12:15:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:43.938 12:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:43.938 12:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:43.938 12:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:43.938 12:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.938 12:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.938 12:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.938 12:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.938 12:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.938 12:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.938 12:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.938 12:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.938 12:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.938 12:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.938 "name": "Existed_Raid", 00:16:43.938 "uuid": "c70eea82-7258-42ee-8dcf-9e37625dae2c", 00:16:43.938 "strip_size_kb": 0, 00:16:43.938 "state": "configuring", 00:16:43.938 "raid_level": "raid1", 00:16:43.938 "superblock": true, 00:16:43.938 "num_base_bdevs": 4, 00:16:43.938 "num_base_bdevs_discovered": 2, 00:16:43.938 "num_base_bdevs_operational": 4, 00:16:43.938 "base_bdevs_list": [ 00:16:43.938 { 00:16:43.938 "name": null, 00:16:43.938 
"uuid": "feafbf08-e670-41fb-832a-37e24ba305af", 00:16:43.938 "is_configured": false, 00:16:43.938 "data_offset": 0, 00:16:43.938 "data_size": 63488 00:16:43.938 }, 00:16:43.938 { 00:16:43.938 "name": null, 00:16:43.938 "uuid": "aaf555e4-6c75-4f4f-a470-e8930c26f627", 00:16:43.938 "is_configured": false, 00:16:43.939 "data_offset": 0, 00:16:43.939 "data_size": 63488 00:16:43.939 }, 00:16:43.939 { 00:16:43.939 "name": "BaseBdev3", 00:16:43.939 "uuid": "8179b713-f689-4c51-bfd2-345b2dc6f8b5", 00:16:43.939 "is_configured": true, 00:16:43.939 "data_offset": 2048, 00:16:43.939 "data_size": 63488 00:16:43.939 }, 00:16:43.939 { 00:16:43.939 "name": "BaseBdev4", 00:16:43.939 "uuid": "4c3899b6-4bd8-4beb-8c8d-5050244f8721", 00:16:43.939 "is_configured": true, 00:16:43.939 "data_offset": 2048, 00:16:43.939 "data_size": 63488 00:16:43.939 } 00:16:43.939 ] 00:16:43.939 }' 00:16:43.939 12:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.939 12:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.506 12:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:44.506 12:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.506 12:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.506 12:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.506 12:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.506 12:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:44.506 12:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:44.506 12:15:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.506 12:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.506 [2024-11-25 12:15:40.537305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:44.506 12:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.506 12:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:44.506 12:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:44.506 12:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:44.506 12:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:44.506 12:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:44.506 12:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:44.506 12:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.506 12:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.506 12:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.506 12:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.506 12:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.506 12:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.506 12:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.506 12:15:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.506 12:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.765 12:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.765 "name": "Existed_Raid", 00:16:44.765 "uuid": "c70eea82-7258-42ee-8dcf-9e37625dae2c", 00:16:44.765 "strip_size_kb": 0, 00:16:44.765 "state": "configuring", 00:16:44.765 "raid_level": "raid1", 00:16:44.765 "superblock": true, 00:16:44.765 "num_base_bdevs": 4, 00:16:44.765 "num_base_bdevs_discovered": 3, 00:16:44.765 "num_base_bdevs_operational": 4, 00:16:44.765 "base_bdevs_list": [ 00:16:44.765 { 00:16:44.765 "name": null, 00:16:44.765 "uuid": "feafbf08-e670-41fb-832a-37e24ba305af", 00:16:44.765 "is_configured": false, 00:16:44.765 "data_offset": 0, 00:16:44.765 "data_size": 63488 00:16:44.765 }, 00:16:44.765 { 00:16:44.765 "name": "BaseBdev2", 00:16:44.765 "uuid": "aaf555e4-6c75-4f4f-a470-e8930c26f627", 00:16:44.765 "is_configured": true, 00:16:44.765 "data_offset": 2048, 00:16:44.765 "data_size": 63488 00:16:44.765 }, 00:16:44.765 { 00:16:44.765 "name": "BaseBdev3", 00:16:44.765 "uuid": "8179b713-f689-4c51-bfd2-345b2dc6f8b5", 00:16:44.765 "is_configured": true, 00:16:44.765 "data_offset": 2048, 00:16:44.765 "data_size": 63488 00:16:44.765 }, 00:16:44.765 { 00:16:44.765 "name": "BaseBdev4", 00:16:44.765 "uuid": "4c3899b6-4bd8-4beb-8c8d-5050244f8721", 00:16:44.765 "is_configured": true, 00:16:44.765 "data_offset": 2048, 00:16:44.766 "data_size": 63488 00:16:44.766 } 00:16:44.766 ] 00:16:44.766 }' 00:16:44.766 12:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.766 12:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.024 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.024 12:15:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:45.024 12:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.024 12:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.024 12:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.024 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:45.024 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.024 12:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.024 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:45.024 12:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.284 12:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.284 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u feafbf08-e670-41fb-832a-37e24ba305af 00:16:45.284 12:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.284 12:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.284 [2024-11-25 12:15:41.195175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:45.284 [2024-11-25 12:15:41.195667] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:45.284 [2024-11-25 12:15:41.195700] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:45.284 NewBaseBdev 00:16:45.284 [2024-11-25 12:15:41.196023] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:45.284 [2024-11-25 12:15:41.196244] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:45.284 [2024-11-25 12:15:41.196268] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:45.284 12:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.284 [2024-11-25 12:15:41.196485] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.284 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:45.284 12:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:45.284 12:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:45.284 12:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:45.284 12:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:45.284 12:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:45.284 12:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:45.284 12:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.284 12:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.284 12:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.284 12:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:45.284 12:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.284 
12:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.284 [ 00:16:45.284 { 00:16:45.284 "name": "NewBaseBdev", 00:16:45.284 "aliases": [ 00:16:45.284 "feafbf08-e670-41fb-832a-37e24ba305af" 00:16:45.284 ], 00:16:45.284 "product_name": "Malloc disk", 00:16:45.284 "block_size": 512, 00:16:45.284 "num_blocks": 65536, 00:16:45.284 "uuid": "feafbf08-e670-41fb-832a-37e24ba305af", 00:16:45.284 "assigned_rate_limits": { 00:16:45.284 "rw_ios_per_sec": 0, 00:16:45.284 "rw_mbytes_per_sec": 0, 00:16:45.284 "r_mbytes_per_sec": 0, 00:16:45.284 "w_mbytes_per_sec": 0 00:16:45.284 }, 00:16:45.284 "claimed": true, 00:16:45.284 "claim_type": "exclusive_write", 00:16:45.284 "zoned": false, 00:16:45.284 "supported_io_types": { 00:16:45.284 "read": true, 00:16:45.284 "write": true, 00:16:45.284 "unmap": true, 00:16:45.284 "flush": true, 00:16:45.284 "reset": true, 00:16:45.284 "nvme_admin": false, 00:16:45.284 "nvme_io": false, 00:16:45.284 "nvme_io_md": false, 00:16:45.284 "write_zeroes": true, 00:16:45.284 "zcopy": true, 00:16:45.284 "get_zone_info": false, 00:16:45.284 "zone_management": false, 00:16:45.284 "zone_append": false, 00:16:45.284 "compare": false, 00:16:45.284 "compare_and_write": false, 00:16:45.284 "abort": true, 00:16:45.284 "seek_hole": false, 00:16:45.284 "seek_data": false, 00:16:45.284 "copy": true, 00:16:45.284 "nvme_iov_md": false 00:16:45.284 }, 00:16:45.284 "memory_domains": [ 00:16:45.284 { 00:16:45.284 "dma_device_id": "system", 00:16:45.284 "dma_device_type": 1 00:16:45.284 }, 00:16:45.284 { 00:16:45.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.284 "dma_device_type": 2 00:16:45.284 } 00:16:45.284 ], 00:16:45.284 "driver_specific": {} 00:16:45.284 } 00:16:45.284 ] 00:16:45.284 12:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.284 12:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:45.284 12:15:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:16:45.284 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:45.284 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.284 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:45.284 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:45.284 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:45.284 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.284 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.284 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.284 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.284 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.285 12:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.285 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.285 12:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.285 12:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.285 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.285 "name": "Existed_Raid", 00:16:45.285 "uuid": "c70eea82-7258-42ee-8dcf-9e37625dae2c", 00:16:45.285 "strip_size_kb": 0, 00:16:45.285 
"state": "online", 00:16:45.285 "raid_level": "raid1", 00:16:45.285 "superblock": true, 00:16:45.285 "num_base_bdevs": 4, 00:16:45.285 "num_base_bdevs_discovered": 4, 00:16:45.285 "num_base_bdevs_operational": 4, 00:16:45.285 "base_bdevs_list": [ 00:16:45.285 { 00:16:45.285 "name": "NewBaseBdev", 00:16:45.285 "uuid": "feafbf08-e670-41fb-832a-37e24ba305af", 00:16:45.285 "is_configured": true, 00:16:45.285 "data_offset": 2048, 00:16:45.285 "data_size": 63488 00:16:45.285 }, 00:16:45.285 { 00:16:45.285 "name": "BaseBdev2", 00:16:45.285 "uuid": "aaf555e4-6c75-4f4f-a470-e8930c26f627", 00:16:45.285 "is_configured": true, 00:16:45.285 "data_offset": 2048, 00:16:45.285 "data_size": 63488 00:16:45.285 }, 00:16:45.285 { 00:16:45.285 "name": "BaseBdev3", 00:16:45.285 "uuid": "8179b713-f689-4c51-bfd2-345b2dc6f8b5", 00:16:45.285 "is_configured": true, 00:16:45.285 "data_offset": 2048, 00:16:45.285 "data_size": 63488 00:16:45.285 }, 00:16:45.285 { 00:16:45.285 "name": "BaseBdev4", 00:16:45.285 "uuid": "4c3899b6-4bd8-4beb-8c8d-5050244f8721", 00:16:45.285 "is_configured": true, 00:16:45.285 "data_offset": 2048, 00:16:45.285 "data_size": 63488 00:16:45.285 } 00:16:45.285 ] 00:16:45.285 }' 00:16:45.285 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.285 12:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.869 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:45.869 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:45.869 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:45.869 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:45.869 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:45.870 
12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:45.870 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:45.870 12:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.870 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:45.870 12:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.870 [2024-11-25 12:15:41.747816] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:45.870 12:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.870 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:45.870 "name": "Existed_Raid", 00:16:45.870 "aliases": [ 00:16:45.870 "c70eea82-7258-42ee-8dcf-9e37625dae2c" 00:16:45.870 ], 00:16:45.870 "product_name": "Raid Volume", 00:16:45.870 "block_size": 512, 00:16:45.870 "num_blocks": 63488, 00:16:45.870 "uuid": "c70eea82-7258-42ee-8dcf-9e37625dae2c", 00:16:45.870 "assigned_rate_limits": { 00:16:45.870 "rw_ios_per_sec": 0, 00:16:45.870 "rw_mbytes_per_sec": 0, 00:16:45.870 "r_mbytes_per_sec": 0, 00:16:45.870 "w_mbytes_per_sec": 0 00:16:45.870 }, 00:16:45.870 "claimed": false, 00:16:45.870 "zoned": false, 00:16:45.870 "supported_io_types": { 00:16:45.870 "read": true, 00:16:45.870 "write": true, 00:16:45.870 "unmap": false, 00:16:45.870 "flush": false, 00:16:45.870 "reset": true, 00:16:45.870 "nvme_admin": false, 00:16:45.870 "nvme_io": false, 00:16:45.870 "nvme_io_md": false, 00:16:45.870 "write_zeroes": true, 00:16:45.870 "zcopy": false, 00:16:45.870 "get_zone_info": false, 00:16:45.870 "zone_management": false, 00:16:45.870 "zone_append": false, 00:16:45.870 "compare": false, 00:16:45.870 "compare_and_write": false, 00:16:45.870 
"abort": false, 00:16:45.870 "seek_hole": false, 00:16:45.870 "seek_data": false, 00:16:45.870 "copy": false, 00:16:45.870 "nvme_iov_md": false 00:16:45.870 }, 00:16:45.870 "memory_domains": [ 00:16:45.870 { 00:16:45.870 "dma_device_id": "system", 00:16:45.870 "dma_device_type": 1 00:16:45.870 }, 00:16:45.870 { 00:16:45.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.870 "dma_device_type": 2 00:16:45.870 }, 00:16:45.870 { 00:16:45.870 "dma_device_id": "system", 00:16:45.870 "dma_device_type": 1 00:16:45.870 }, 00:16:45.870 { 00:16:45.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.870 "dma_device_type": 2 00:16:45.870 }, 00:16:45.870 { 00:16:45.870 "dma_device_id": "system", 00:16:45.870 "dma_device_type": 1 00:16:45.870 }, 00:16:45.870 { 00:16:45.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.870 "dma_device_type": 2 00:16:45.870 }, 00:16:45.870 { 00:16:45.870 "dma_device_id": "system", 00:16:45.870 "dma_device_type": 1 00:16:45.870 }, 00:16:45.870 { 00:16:45.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.870 "dma_device_type": 2 00:16:45.870 } 00:16:45.870 ], 00:16:45.870 "driver_specific": { 00:16:45.870 "raid": { 00:16:45.870 "uuid": "c70eea82-7258-42ee-8dcf-9e37625dae2c", 00:16:45.870 "strip_size_kb": 0, 00:16:45.870 "state": "online", 00:16:45.870 "raid_level": "raid1", 00:16:45.870 "superblock": true, 00:16:45.870 "num_base_bdevs": 4, 00:16:45.870 "num_base_bdevs_discovered": 4, 00:16:45.870 "num_base_bdevs_operational": 4, 00:16:45.870 "base_bdevs_list": [ 00:16:45.870 { 00:16:45.870 "name": "NewBaseBdev", 00:16:45.870 "uuid": "feafbf08-e670-41fb-832a-37e24ba305af", 00:16:45.870 "is_configured": true, 00:16:45.870 "data_offset": 2048, 00:16:45.870 "data_size": 63488 00:16:45.870 }, 00:16:45.870 { 00:16:45.870 "name": "BaseBdev2", 00:16:45.870 "uuid": "aaf555e4-6c75-4f4f-a470-e8930c26f627", 00:16:45.870 "is_configured": true, 00:16:45.870 "data_offset": 2048, 00:16:45.870 "data_size": 63488 00:16:45.870 }, 00:16:45.870 { 
00:16:45.870 "name": "BaseBdev3", 00:16:45.870 "uuid": "8179b713-f689-4c51-bfd2-345b2dc6f8b5", 00:16:45.870 "is_configured": true, 00:16:45.870 "data_offset": 2048, 00:16:45.870 "data_size": 63488 00:16:45.870 }, 00:16:45.870 { 00:16:45.870 "name": "BaseBdev4", 00:16:45.870 "uuid": "4c3899b6-4bd8-4beb-8c8d-5050244f8721", 00:16:45.870 "is_configured": true, 00:16:45.870 "data_offset": 2048, 00:16:45.870 "data_size": 63488 00:16:45.870 } 00:16:45.870 ] 00:16:45.870 } 00:16:45.870 } 00:16:45.870 }' 00:16:45.870 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:45.870 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:45.870 BaseBdev2 00:16:45.870 BaseBdev3 00:16:45.870 BaseBdev4' 00:16:45.870 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:45.870 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:45.870 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:45.870 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:45.870 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:45.870 12:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.870 12:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.870 12:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.870 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:16:45.870 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:45.870 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:45.870 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:45.870 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:45.870 12:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.870 12:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.129 12:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.129 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:46.129 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:46.129 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:46.129 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:46.129 12:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:46.129 12:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.129 12:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.129 12:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.129 12:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:46.129 12:15:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:46.129 12:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:46.129 12:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:46.129 12:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.129 12:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.129 12:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:46.129 12:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.129 12:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:46.129 12:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:46.129 12:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:46.129 12:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.129 12:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.129 [2024-11-25 12:15:42.103455] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:46.129 [2024-11-25 12:15:42.103491] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:46.129 [2024-11-25 12:15:42.103581] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:46.129 [2024-11-25 12:15:42.103937] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:46.129 [2024-11-25 12:15:42.103959] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:16:46.129 12:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.129 12:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73996 00:16:46.129 12:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73996 ']' 00:16:46.129 12:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73996 00:16:46.130 12:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:46.130 12:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:46.130 12:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73996 00:16:46.130 killing process with pid 73996 00:16:46.130 12:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:46.130 12:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:46.130 12:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73996' 00:16:46.130 12:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73996 00:16:46.130 [2024-11-25 12:15:42.143203] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:46.130 12:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73996 00:16:46.697 [2024-11-25 12:15:42.491615] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:47.633 ************************************ 00:16:47.633 END TEST raid_state_function_test_sb 00:16:47.633 ************************************ 00:16:47.633 12:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:47.633 00:16:47.633 real 0m12.622s 
00:16:47.633 user 0m21.000s 00:16:47.633 sys 0m1.747s 00:16:47.633 12:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:47.633 12:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.633 12:15:43 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:16:47.633 12:15:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:47.633 12:15:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:47.633 12:15:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:47.633 ************************************ 00:16:47.633 START TEST raid_superblock_test 00:16:47.633 ************************************ 00:16:47.633 12:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:16:47.633 12:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:47.633 12:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:47.633 12:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:47.633 12:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:47.633 12:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:47.633 12:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:47.633 12:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:47.633 12:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:47.633 12:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:47.633 12:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:47.633 12:15:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:47.633 12:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:47.633 12:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:47.633 12:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:47.633 12:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:47.633 12:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74678 00:16:47.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.633 12:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74678 00:16:47.633 12:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:47.633 12:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74678 ']' 00:16:47.633 12:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.633 12:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:47.633 12:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.633 12:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:47.633 12:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.633 [2024-11-25 12:15:43.670639] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:16:47.633 [2024-11-25 12:15:43.670843] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74678 ] 00:16:47.891 [2024-11-25 12:15:43.860950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.149 [2024-11-25 12:15:43.995560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.150 [2024-11-25 12:15:44.199085] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:48.150 [2024-11-25 12:15:44.199403] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:16:48.716 
12:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.716 malloc1 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.716 [2024-11-25 12:15:44.712682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:48.716 [2024-11-25 12:15:44.712791] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.716 [2024-11-25 12:15:44.712826] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:48.716 [2024-11-25 12:15:44.712842] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.716 [2024-11-25 12:15:44.715814] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.716 [2024-11-25 12:15:44.716017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:48.716 pt1 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.716 malloc2 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.716 [2024-11-25 12:15:44.765035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:48.716 [2024-11-25 12:15:44.765106] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.716 [2024-11-25 12:15:44.765138] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:48.716 [2024-11-25 12:15:44.765153] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.716 [2024-11-25 12:15:44.767966] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.716 [2024-11-25 12:15:44.768011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:48.716 
pt2 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.716 12:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.975 malloc3 00:16:48.975 12:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.975 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:48.975 12:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.975 12:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.975 [2024-11-25 12:15:44.836655] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:48.975 [2024-11-25 12:15:44.836877] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.975 [2024-11-25 12:15:44.836990] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:48.975 [2024-11-25 12:15:44.837185] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.975 [2024-11-25 12:15:44.839982] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.975 [2024-11-25 12:15:44.840133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:48.975 pt3 00:16:48.975 12:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.975 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:48.975 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:48.975 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:16:48.975 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:16:48.975 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:48.975 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:48.975 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:48.975 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:48.975 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:16:48.975 12:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.975 12:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.975 malloc4 00:16:48.975 12:15:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.975 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:48.975 12:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.975 12:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.975 [2024-11-25 12:15:44.892527] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:48.975 [2024-11-25 12:15:44.892723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.975 [2024-11-25 12:15:44.892801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:48.975 [2024-11-25 12:15:44.892914] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.975 [2024-11-25 12:15:44.895761] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.975 [2024-11-25 12:15:44.895917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:48.975 pt4 00:16:48.975 12:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.975 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:48.975 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:48.975 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:16:48.975 12:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.975 12:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.975 [2024-11-25 12:15:44.900703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:48.975 [2024-11-25 12:15:44.903198] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:48.975 [2024-11-25 12:15:44.903434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:48.975 [2024-11-25 12:15:44.903555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:48.975 [2024-11-25 12:15:44.903917] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:48.975 [2024-11-25 12:15:44.904083] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:48.975 [2024-11-25 12:15:44.904489] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:48.975 [2024-11-25 12:15:44.904868] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:48.976 [2024-11-25 12:15:44.905014] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:48.976 [2024-11-25 12:15:44.905396] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:48.976 12:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.976 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:48.976 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.976 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:48.976 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:48.976 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:48.976 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:48.976 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.976 
12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.976 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.976 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.976 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.976 12:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.976 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.976 12:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.976 12:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.976 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.976 "name": "raid_bdev1", 00:16:48.976 "uuid": "b3b3a278-618c-4e9f-9f87-7cb25dad4595", 00:16:48.976 "strip_size_kb": 0, 00:16:48.976 "state": "online", 00:16:48.976 "raid_level": "raid1", 00:16:48.976 "superblock": true, 00:16:48.976 "num_base_bdevs": 4, 00:16:48.976 "num_base_bdevs_discovered": 4, 00:16:48.976 "num_base_bdevs_operational": 4, 00:16:48.976 "base_bdevs_list": [ 00:16:48.976 { 00:16:48.976 "name": "pt1", 00:16:48.976 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:48.976 "is_configured": true, 00:16:48.976 "data_offset": 2048, 00:16:48.976 "data_size": 63488 00:16:48.976 }, 00:16:48.976 { 00:16:48.976 "name": "pt2", 00:16:48.976 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:48.976 "is_configured": true, 00:16:48.976 "data_offset": 2048, 00:16:48.976 "data_size": 63488 00:16:48.976 }, 00:16:48.976 { 00:16:48.976 "name": "pt3", 00:16:48.976 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:48.976 "is_configured": true, 00:16:48.976 "data_offset": 2048, 00:16:48.976 "data_size": 63488 
00:16:48.976 }, 00:16:48.976 { 00:16:48.976 "name": "pt4", 00:16:48.976 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:48.976 "is_configured": true, 00:16:48.976 "data_offset": 2048, 00:16:48.976 "data_size": 63488 00:16:48.976 } 00:16:48.976 ] 00:16:48.976 }' 00:16:48.976 12:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.976 12:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.542 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:49.542 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:49.542 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:49.542 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:49.542 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:49.542 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:49.542 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:49.542 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:49.542 12:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.542 12:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.543 [2024-11-25 12:15:45.425900] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:49.543 12:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.543 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:49.543 "name": "raid_bdev1", 00:16:49.543 "aliases": [ 00:16:49.543 "b3b3a278-618c-4e9f-9f87-7cb25dad4595" 00:16:49.543 ], 
00:16:49.543 "product_name": "Raid Volume", 00:16:49.543 "block_size": 512, 00:16:49.543 "num_blocks": 63488, 00:16:49.543 "uuid": "b3b3a278-618c-4e9f-9f87-7cb25dad4595", 00:16:49.543 "assigned_rate_limits": { 00:16:49.543 "rw_ios_per_sec": 0, 00:16:49.543 "rw_mbytes_per_sec": 0, 00:16:49.543 "r_mbytes_per_sec": 0, 00:16:49.543 "w_mbytes_per_sec": 0 00:16:49.543 }, 00:16:49.543 "claimed": false, 00:16:49.543 "zoned": false, 00:16:49.543 "supported_io_types": { 00:16:49.543 "read": true, 00:16:49.543 "write": true, 00:16:49.543 "unmap": false, 00:16:49.543 "flush": false, 00:16:49.543 "reset": true, 00:16:49.543 "nvme_admin": false, 00:16:49.543 "nvme_io": false, 00:16:49.543 "nvme_io_md": false, 00:16:49.543 "write_zeroes": true, 00:16:49.543 "zcopy": false, 00:16:49.543 "get_zone_info": false, 00:16:49.543 "zone_management": false, 00:16:49.543 "zone_append": false, 00:16:49.543 "compare": false, 00:16:49.543 "compare_and_write": false, 00:16:49.543 "abort": false, 00:16:49.543 "seek_hole": false, 00:16:49.543 "seek_data": false, 00:16:49.543 "copy": false, 00:16:49.543 "nvme_iov_md": false 00:16:49.543 }, 00:16:49.543 "memory_domains": [ 00:16:49.543 { 00:16:49.543 "dma_device_id": "system", 00:16:49.543 "dma_device_type": 1 00:16:49.543 }, 00:16:49.543 { 00:16:49.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.543 "dma_device_type": 2 00:16:49.543 }, 00:16:49.543 { 00:16:49.543 "dma_device_id": "system", 00:16:49.543 "dma_device_type": 1 00:16:49.543 }, 00:16:49.543 { 00:16:49.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.543 "dma_device_type": 2 00:16:49.543 }, 00:16:49.543 { 00:16:49.543 "dma_device_id": "system", 00:16:49.543 "dma_device_type": 1 00:16:49.543 }, 00:16:49.543 { 00:16:49.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.543 "dma_device_type": 2 00:16:49.543 }, 00:16:49.543 { 00:16:49.543 "dma_device_id": "system", 00:16:49.543 "dma_device_type": 1 00:16:49.543 }, 00:16:49.543 { 00:16:49.543 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:49.543 "dma_device_type": 2 00:16:49.543 } 00:16:49.543 ], 00:16:49.543 "driver_specific": { 00:16:49.543 "raid": { 00:16:49.543 "uuid": "b3b3a278-618c-4e9f-9f87-7cb25dad4595", 00:16:49.543 "strip_size_kb": 0, 00:16:49.543 "state": "online", 00:16:49.543 "raid_level": "raid1", 00:16:49.543 "superblock": true, 00:16:49.543 "num_base_bdevs": 4, 00:16:49.543 "num_base_bdevs_discovered": 4, 00:16:49.543 "num_base_bdevs_operational": 4, 00:16:49.543 "base_bdevs_list": [ 00:16:49.543 { 00:16:49.543 "name": "pt1", 00:16:49.543 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:49.543 "is_configured": true, 00:16:49.543 "data_offset": 2048, 00:16:49.543 "data_size": 63488 00:16:49.543 }, 00:16:49.543 { 00:16:49.543 "name": "pt2", 00:16:49.543 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:49.543 "is_configured": true, 00:16:49.543 "data_offset": 2048, 00:16:49.543 "data_size": 63488 00:16:49.543 }, 00:16:49.543 { 00:16:49.543 "name": "pt3", 00:16:49.543 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:49.543 "is_configured": true, 00:16:49.543 "data_offset": 2048, 00:16:49.543 "data_size": 63488 00:16:49.543 }, 00:16:49.543 { 00:16:49.543 "name": "pt4", 00:16:49.543 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:49.543 "is_configured": true, 00:16:49.543 "data_offset": 2048, 00:16:49.543 "data_size": 63488 00:16:49.543 } 00:16:49.543 ] 00:16:49.543 } 00:16:49.543 } 00:16:49.543 }' 00:16:49.543 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:49.543 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:49.543 pt2 00:16:49.543 pt3 00:16:49.543 pt4' 00:16:49.543 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:49.543 12:15:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:49.543 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:49.543 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:49.543 12:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.543 12:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.543 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:49.543 12:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.801 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:49.801 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:49.801 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:49.801 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:49.801 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:49.801 12:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.801 12:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.801 12:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.801 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:49.801 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:49.801 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:49.802 12:15:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:49.802 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:49.802 12:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.802 12:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.802 12:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.802 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:49.802 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:49.802 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:49.802 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:49.802 12:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.802 12:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.802 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:49.802 12:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.802 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:49.802 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:49.802 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:49.802 12:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.802 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | 
.uuid' 00:16:49.802 12:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.802 [2024-11-25 12:15:45.813970] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:49.802 12:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.802 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b3b3a278-618c-4e9f-9f87-7cb25dad4595 00:16:49.802 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b3b3a278-618c-4e9f-9f87-7cb25dad4595 ']' 00:16:49.802 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:49.802 12:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.802 12:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.802 [2024-11-25 12:15:45.861570] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:49.802 [2024-11-25 12:15:45.861711] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:49.802 [2024-11-25 12:15:45.861931] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:49.802 [2024-11-25 12:15:45.862061] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:49.802 [2024-11-25 12:15:45.862099] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:49.802 12:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.802 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.802 12:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.802 12:15:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:49.802 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:49.802 12:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.060 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:50.060 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:50.060 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:50.060 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:50.060 12:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.060 12:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.060 12:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.060 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:50.060 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:50.061 12:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.061 12:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.061 12:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.061 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:50.061 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:50.061 12:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.061 12:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.061 12:15:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.061 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:50.061 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:16:50.061 12:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.061 12:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.061 12:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.061 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:50.061 12:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.061 12:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.061 12:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:50.061 12:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.061 12:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:50.061 12:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:50.061 12:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:16:50.061 12:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:50.061 12:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:50.061 12:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:50.061 12:15:46 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:50.061 12:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:50.061 12:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:50.061 12:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.061 12:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.061 [2024-11-25 12:15:46.013628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:50.061 [2024-11-25 12:15:46.016043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:50.061 [2024-11-25 12:15:46.016120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:50.061 [2024-11-25 12:15:46.016176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:50.061 [2024-11-25 12:15:46.016250] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:50.061 [2024-11-25 12:15:46.016325] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:50.061 [2024-11-25 12:15:46.016378] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:50.061 [2024-11-25 12:15:46.016413] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:50.061 [2024-11-25 12:15:46.016437] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:50.061 [2024-11-25 12:15:46.016454] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:16:50.061 request: 00:16:50.061 { 00:16:50.061 "name": "raid_bdev1", 00:16:50.061 "raid_level": "raid1", 00:16:50.061 "base_bdevs": [ 00:16:50.061 "malloc1", 00:16:50.061 "malloc2", 00:16:50.061 "malloc3", 00:16:50.061 "malloc4" 00:16:50.061 ], 00:16:50.061 "superblock": false, 00:16:50.061 "method": "bdev_raid_create", 00:16:50.061 "req_id": 1 00:16:50.061 } 00:16:50.061 Got JSON-RPC error response 00:16:50.061 response: 00:16:50.061 { 00:16:50.061 "code": -17, 00:16:50.061 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:50.061 } 00:16:50.061 12:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:50.061 12:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:16:50.061 12:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:50.061 12:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:50.061 12:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:50.061 12:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.061 12:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.061 12:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:50.061 12:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.061 12:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.061 12:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:50.061 12:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:50.061 12:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:50.061 
12:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.061 12:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.061 [2024-11-25 12:15:46.077622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:50.061 [2024-11-25 12:15:46.077818] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.061 [2024-11-25 12:15:46.077853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:50.061 [2024-11-25 12:15:46.077872] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.061 [2024-11-25 12:15:46.080705] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.061 [2024-11-25 12:15:46.080761] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:50.061 [2024-11-25 12:15:46.080851] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:50.061 [2024-11-25 12:15:46.080923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:50.061 pt1 00:16:50.061 12:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.061 12:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:16:50.061 12:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.061 12:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:50.061 12:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:50.061 12:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:50.061 12:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:50.061 12:15:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.061 12:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.061 12:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.061 12:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.061 12:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.061 12:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.061 12:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.061 12:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.061 12:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.061 12:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.061 "name": "raid_bdev1", 00:16:50.061 "uuid": "b3b3a278-618c-4e9f-9f87-7cb25dad4595", 00:16:50.061 "strip_size_kb": 0, 00:16:50.061 "state": "configuring", 00:16:50.061 "raid_level": "raid1", 00:16:50.061 "superblock": true, 00:16:50.061 "num_base_bdevs": 4, 00:16:50.061 "num_base_bdevs_discovered": 1, 00:16:50.061 "num_base_bdevs_operational": 4, 00:16:50.061 "base_bdevs_list": [ 00:16:50.061 { 00:16:50.061 "name": "pt1", 00:16:50.061 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:50.061 "is_configured": true, 00:16:50.061 "data_offset": 2048, 00:16:50.061 "data_size": 63488 00:16:50.061 }, 00:16:50.061 { 00:16:50.061 "name": null, 00:16:50.061 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:50.061 "is_configured": false, 00:16:50.061 "data_offset": 2048, 00:16:50.061 "data_size": 63488 00:16:50.061 }, 00:16:50.061 { 00:16:50.061 "name": null, 00:16:50.061 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:50.061 
"is_configured": false, 00:16:50.061 "data_offset": 2048, 00:16:50.061 "data_size": 63488 00:16:50.061 }, 00:16:50.061 { 00:16:50.061 "name": null, 00:16:50.061 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:50.061 "is_configured": false, 00:16:50.061 "data_offset": 2048, 00:16:50.061 "data_size": 63488 00:16:50.061 } 00:16:50.061 ] 00:16:50.061 }' 00:16:50.061 12:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.061 12:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.628 12:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:50.628 12:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:50.628 12:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.628 12:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.628 [2024-11-25 12:15:46.573824] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:50.628 [2024-11-25 12:15:46.573930] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.628 [2024-11-25 12:15:46.573963] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:50.628 [2024-11-25 12:15:46.573981] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.628 [2024-11-25 12:15:46.574577] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.628 [2024-11-25 12:15:46.574619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:50.628 [2024-11-25 12:15:46.574724] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:50.628 [2024-11-25 12:15:46.574786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:16:50.628 pt2 00:16:50.628 12:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.628 12:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:50.628 12:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.628 12:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.628 [2024-11-25 12:15:46.581832] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:50.628 12:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.628 12:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:16:50.628 12:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.628 12:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:50.628 12:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:50.628 12:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:50.628 12:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:50.628 12:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.628 12:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.628 12:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.628 12:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.628 12:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.628 12:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:16:50.628 12:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.628 12:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.628 12:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.628 12:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.628 "name": "raid_bdev1", 00:16:50.628 "uuid": "b3b3a278-618c-4e9f-9f87-7cb25dad4595", 00:16:50.628 "strip_size_kb": 0, 00:16:50.628 "state": "configuring", 00:16:50.628 "raid_level": "raid1", 00:16:50.628 "superblock": true, 00:16:50.628 "num_base_bdevs": 4, 00:16:50.628 "num_base_bdevs_discovered": 1, 00:16:50.628 "num_base_bdevs_operational": 4, 00:16:50.628 "base_bdevs_list": [ 00:16:50.628 { 00:16:50.628 "name": "pt1", 00:16:50.628 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:50.628 "is_configured": true, 00:16:50.628 "data_offset": 2048, 00:16:50.628 "data_size": 63488 00:16:50.628 }, 00:16:50.628 { 00:16:50.628 "name": null, 00:16:50.628 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:50.628 "is_configured": false, 00:16:50.628 "data_offset": 0, 00:16:50.628 "data_size": 63488 00:16:50.628 }, 00:16:50.628 { 00:16:50.628 "name": null, 00:16:50.628 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:50.628 "is_configured": false, 00:16:50.628 "data_offset": 2048, 00:16:50.628 "data_size": 63488 00:16:50.628 }, 00:16:50.628 { 00:16:50.628 "name": null, 00:16:50.628 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:50.628 "is_configured": false, 00:16:50.628 "data_offset": 2048, 00:16:50.628 "data_size": 63488 00:16:50.628 } 00:16:50.628 ] 00:16:50.628 }' 00:16:50.628 12:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.628 12:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.194 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:16:51.194 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:51.194 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:51.194 12:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.194 12:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.194 [2024-11-25 12:15:47.101942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:51.194 [2024-11-25 12:15:47.102035] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:51.194 [2024-11-25 12:15:47.102069] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:51.194 [2024-11-25 12:15:47.102094] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:51.194 [2024-11-25 12:15:47.102712] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:51.194 [2024-11-25 12:15:47.102745] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:51.194 [2024-11-25 12:15:47.102871] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:51.194 [2024-11-25 12:15:47.102904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:51.194 pt2 00:16:51.194 12:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.194 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:51.194 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:51.194 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:51.194 12:15:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.194 12:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.194 [2024-11-25 12:15:47.109894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:51.194 [2024-11-25 12:15:47.110130] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:51.194 [2024-11-25 12:15:47.110169] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:51.194 [2024-11-25 12:15:47.110185] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:51.194 [2024-11-25 12:15:47.110685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:51.194 [2024-11-25 12:15:47.110736] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:51.194 [2024-11-25 12:15:47.110835] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:51.194 [2024-11-25 12:15:47.110864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:51.194 pt3 00:16:51.194 12:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.194 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:51.194 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:51.194 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:51.194 12:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.194 12:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.194 [2024-11-25 12:15:47.117890] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:51.194 [2024-11-25 
12:15:47.117956] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:51.194 [2024-11-25 12:15:47.117983] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:51.194 [2024-11-25 12:15:47.117998] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:51.194 [2024-11-25 12:15:47.118489] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:51.194 [2024-11-25 12:15:47.118530] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:51.194 [2024-11-25 12:15:47.118613] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:51.194 [2024-11-25 12:15:47.118641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:51.194 [2024-11-25 12:15:47.118823] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:51.194 [2024-11-25 12:15:47.118845] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:51.194 [2024-11-25 12:15:47.119165] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:51.194 [2024-11-25 12:15:47.119388] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:51.194 [2024-11-25 12:15:47.119410] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:51.194 [2024-11-25 12:15:47.119575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:51.194 pt4 00:16:51.194 12:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.194 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:51.194 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:51.194 12:15:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:51.194 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:51.194 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:51.194 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:51.194 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:51.194 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:51.194 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.194 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.194 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.194 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.194 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.194 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.194 12:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.194 12:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.194 12:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.194 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.194 "name": "raid_bdev1", 00:16:51.194 "uuid": "b3b3a278-618c-4e9f-9f87-7cb25dad4595", 00:16:51.195 "strip_size_kb": 0, 00:16:51.195 "state": "online", 00:16:51.195 "raid_level": "raid1", 00:16:51.195 "superblock": true, 00:16:51.195 "num_base_bdevs": 4, 00:16:51.195 
"num_base_bdevs_discovered": 4,
00:16:51.195 "num_base_bdevs_operational": 4,
00:16:51.195 "base_bdevs_list": [
00:16:51.195 {
00:16:51.195 "name": "pt1",
00:16:51.195 "uuid": "00000000-0000-0000-0000-000000000001",
00:16:51.195 "is_configured": true,
00:16:51.195 "data_offset": 2048,
00:16:51.195 "data_size": 63488
00:16:51.195 },
00:16:51.195 {
00:16:51.195 "name": "pt2",
00:16:51.195 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:51.195 "is_configured": true,
00:16:51.195 "data_offset": 2048,
00:16:51.195 "data_size": 63488
00:16:51.195 },
00:16:51.195 {
00:16:51.195 "name": "pt3",
00:16:51.195 "uuid": "00000000-0000-0000-0000-000000000003",
00:16:51.195 "is_configured": true,
00:16:51.195 "data_offset": 2048,
00:16:51.195 "data_size": 63488
00:16:51.195 },
00:16:51.195 {
00:16:51.195 "name": "pt4",
00:16:51.195 "uuid": "00000000-0000-0000-0000-000000000004",
00:16:51.195 "is_configured": true,
00:16:51.195 "data_offset": 2048,
00:16:51.195 "data_size": 63488
00:16:51.195 }
00:16:51.195 ]
00:16:51.195 }'
00:16:51.195 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:51.195 12:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:51.760 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:16:51.760 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:16:51.760 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:16:51.760 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:16:51.760 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:16:51.760 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:16:51.760 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:16:51.760 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:16:51.760 12:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:51.760 12:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:51.760 [2024-11-25 12:15:47.622497] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:51.760 12:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:51.760 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:16:51.760 "name": "raid_bdev1",
00:16:51.760 "aliases": [
00:16:51.760 "b3b3a278-618c-4e9f-9f87-7cb25dad4595"
00:16:51.760 ],
00:16:51.760 "product_name": "Raid Volume",
00:16:51.760 "block_size": 512,
00:16:51.760 "num_blocks": 63488,
00:16:51.760 "uuid": "b3b3a278-618c-4e9f-9f87-7cb25dad4595",
00:16:51.760 "assigned_rate_limits": {
00:16:51.760 "rw_ios_per_sec": 0,
00:16:51.760 "rw_mbytes_per_sec": 0,
00:16:51.760 "r_mbytes_per_sec": 0,
00:16:51.760 "w_mbytes_per_sec": 0
00:16:51.760 },
00:16:51.760 "claimed": false,
00:16:51.760 "zoned": false,
00:16:51.760 "supported_io_types": {
00:16:51.760 "read": true,
00:16:51.760 "write": true,
00:16:51.760 "unmap": false,
00:16:51.760 "flush": false,
00:16:51.760 "reset": true,
00:16:51.760 "nvme_admin": false,
00:16:51.760 "nvme_io": false,
00:16:51.760 "nvme_io_md": false,
00:16:51.760 "write_zeroes": true,
00:16:51.760 "zcopy": false,
00:16:51.760 "get_zone_info": false,
00:16:51.760 "zone_management": false,
00:16:51.760 "zone_append": false,
00:16:51.760 "compare": false,
00:16:51.760 "compare_and_write": false,
00:16:51.760 "abort": false,
00:16:51.760 "seek_hole": false,
00:16:51.760 "seek_data": false,
00:16:51.760 "copy": false,
00:16:51.760 "nvme_iov_md": false
00:16:51.760 },
00:16:51.760 "memory_domains": [
00:16:51.760 {
00:16:51.760 "dma_device_id": "system",
00:16:51.760 "dma_device_type": 1
00:16:51.760 },
00:16:51.760 {
00:16:51.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:51.760 "dma_device_type": 2
00:16:51.760 },
00:16:51.760 {
00:16:51.760 "dma_device_id": "system",
00:16:51.760 "dma_device_type": 1
00:16:51.760 },
00:16:51.760 {
00:16:51.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:51.760 "dma_device_type": 2
00:16:51.760 },
00:16:51.760 {
00:16:51.760 "dma_device_id": "system",
00:16:51.760 "dma_device_type": 1
00:16:51.760 },
00:16:51.760 {
00:16:51.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:51.760 "dma_device_type": 2
00:16:51.760 },
00:16:51.760 {
00:16:51.760 "dma_device_id": "system",
00:16:51.760 "dma_device_type": 1
00:16:51.760 },
00:16:51.760 {
00:16:51.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:51.760 "dma_device_type": 2
00:16:51.760 }
00:16:51.760 ],
00:16:51.760 "driver_specific": {
00:16:51.760 "raid": {
00:16:51.760 "uuid": "b3b3a278-618c-4e9f-9f87-7cb25dad4595",
00:16:51.760 "strip_size_kb": 0,
00:16:51.760 "state": "online",
00:16:51.760 "raid_level": "raid1",
00:16:51.760 "superblock": true,
00:16:51.760 "num_base_bdevs": 4,
00:16:51.760 "num_base_bdevs_discovered": 4,
00:16:51.760 "num_base_bdevs_operational": 4,
00:16:51.760 "base_bdevs_list": [
00:16:51.760 {
00:16:51.760 "name": "pt1",
00:16:51.760 "uuid": "00000000-0000-0000-0000-000000000001",
00:16:51.760 "is_configured": true,
00:16:51.760 "data_offset": 2048,
00:16:51.760 "data_size": 63488
00:16:51.760 },
00:16:51.760 {
00:16:51.760 "name": "pt2",
00:16:51.760 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:51.760 "is_configured": true,
00:16:51.760 "data_offset": 2048,
00:16:51.760 "data_size": 63488
00:16:51.760 },
00:16:51.760 {
00:16:51.760 "name": "pt3",
00:16:51.760 "uuid": "00000000-0000-0000-0000-000000000003",
00:16:51.760 "is_configured": true,
00:16:51.760 "data_offset": 2048,
00:16:51.760 "data_size": 63488
00:16:51.760 },
00:16:51.760 {
00:16:51.760 "name": "pt4",
00:16:51.760 "uuid": "00000000-0000-0000-0000-000000000004",
00:16:51.760 "is_configured": true,
00:16:51.760 "data_offset": 2048,
00:16:51.760 "data_size": 63488
00:16:51.760 }
00:16:51.760 ]
00:16:51.760 }
00:16:51.760 }
00:16:51.760 }'
00:16:51.760 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:16:51.760 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:16:51.760 pt2
00:16:51.760 pt3
00:16:51.760 pt4'
00:16:51.760 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:51.760 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:16:51.760 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:51.760 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:16:51.760 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:51.760 12:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:51.760 12:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:51.760 12:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:51.760 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:16:51.760 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:16:51.760 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:51.760 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:16:51.760 12:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:51.760 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:51.760 12:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:51.760 12:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:52.019 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:16:52.019 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:16:52.019 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:52.019 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:52.019 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:16:52.019 12:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:52.019 12:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:52.019 12:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:52.019 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:16:52.019 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:16:52.019 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:52.019 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:16:52.019 12:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:52.019 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:52.019 12:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:52.019 12:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:52.019 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:16:52.019 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:16:52.019 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:16:52.019 12:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:16:52.019 12:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:52.019 12:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:52.019 [2024-11-25 12:15:47.986510] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:52.019 12:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:52.019 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b3b3a278-618c-4e9f-9f87-7cb25dad4595 '!=' b3b3a278-618c-4e9f-9f87-7cb25dad4595 ']'
00:16:52.019 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:16:52.019 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:16:52.019 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:16:52.019 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:16:52.019 12:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:52.019 12:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:52.019 [2024-11-25 12:15:48.038217] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:16:52.019 12:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:52.019 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:16:52.019 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:52.019 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:52.019 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:52.019 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:52.019 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:52.019 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:52.019 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:52.019 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:52.019 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:52.019 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:52.019 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:52.019 12:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:52.019 12:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:52.019 12:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:52.019 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:52.019 "name": "raid_bdev1",
00:16:52.019 "uuid": "b3b3a278-618c-4e9f-9f87-7cb25dad4595",
00:16:52.019 "strip_size_kb": 0,
00:16:52.019 "state": "online",
00:16:52.019 "raid_level": "raid1",
00:16:52.019 "superblock": true,
00:16:52.019 "num_base_bdevs": 4,
00:16:52.019 "num_base_bdevs_discovered": 3,
00:16:52.019 "num_base_bdevs_operational": 3,
00:16:52.019 "base_bdevs_list": [
00:16:52.019 {
00:16:52.019 "name": null,
00:16:52.019 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:52.019 "is_configured": false,
00:16:52.019 "data_offset": 0,
00:16:52.019 "data_size": 63488
00:16:52.019 },
00:16:52.019 {
00:16:52.019 "name": "pt2",
00:16:52.019 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:52.019 "is_configured": true,
00:16:52.019 "data_offset": 2048,
00:16:52.019 "data_size": 63488
00:16:52.019 },
00:16:52.019 {
00:16:52.019 "name": "pt3",
00:16:52.019 "uuid": "00000000-0000-0000-0000-000000000003",
00:16:52.019 "is_configured": true,
00:16:52.019 "data_offset": 2048,
00:16:52.019 "data_size": 63488
00:16:52.019 },
00:16:52.019 {
00:16:52.019 "name": "pt4",
00:16:52.019 "uuid": "00000000-0000-0000-0000-000000000004",
00:16:52.019 "is_configured": true,
00:16:52.019 "data_offset": 2048,
00:16:52.019 "data_size": 63488
00:16:52.019 }
00:16:52.019 ]
00:16:52.019 }'
00:16:52.019 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:52.019 12:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:52.585 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:16:52.585 12:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:52.585 12:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:52.585 [2024-11-25 12:15:48.562334] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:52.585 [2024-11-25 12:15:48.562391] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:52.585 [2024-11-25 12:15:48.562510] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:52.585 [2024-11-25 12:15:48.562651] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:52.585 [2024-11-25 12:15:48.562668] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:16:52.585 12:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:52.585 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:52.585 12:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:52.585 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:16:52.585 12:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:52.585 12:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:52.585 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:16:52.585 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:16:52.585 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:16:52.585 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:16:52.585 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:16:52.585 12:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:52.585 12:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:52.585 12:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:52.585 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:16:52.585 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:16:52.585 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3
00:16:52.585 12:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:52.585 12:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:52.585 12:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:52.585 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:16:52.585 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:16:52.585 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4
00:16:52.585 12:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:52.585 12:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:52.585 12:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:52.585 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:16:52.585 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:16:52.585 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:16:52.585 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:16:52.585 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:52.585 12:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:52.585 12:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:52.585 [2024-11-25 12:15:48.654356] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:52.585 [2024-11-25 12:15:48.654419] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:52.585 [2024-11-25 12:15:48.654448] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:16:52.585 [2024-11-25 12:15:48.654463] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:52.585 [2024-11-25 12:15:48.657517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:52.585 [2024-11-25 12:15:48.657679] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:52.585 [2024-11-25 12:15:48.657902] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:16:52.585 [2024-11-25 12:15:48.658072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:52.585 pt2
00:16:52.585 12:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:52.585 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:16:52.585 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:52.585 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:52.585 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:52.586 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:52.586 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:52.586 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:52.586 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:52.586 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:52.586 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:52.586 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:52.586 12:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:52.586 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:52.586 12:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:52.843 12:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:52.843 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:52.843 "name": "raid_bdev1",
00:16:52.843 "uuid": "b3b3a278-618c-4e9f-9f87-7cb25dad4595",
00:16:52.843 "strip_size_kb": 0,
00:16:52.843 "state": "configuring",
00:16:52.843 "raid_level": "raid1",
00:16:52.843 "superblock": true,
00:16:52.843 "num_base_bdevs": 4,
00:16:52.843 "num_base_bdevs_discovered": 1,
00:16:52.844 "num_base_bdevs_operational": 3,
00:16:52.844 "base_bdevs_list": [
00:16:52.844 {
00:16:52.844 "name": null,
00:16:52.844 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:52.844 "is_configured": false,
00:16:52.844 "data_offset": 2048,
00:16:52.844 "data_size": 63488
00:16:52.844 },
00:16:52.844 {
00:16:52.844 "name": "pt2",
00:16:52.844 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:52.844 "is_configured": true,
00:16:52.844 "data_offset": 2048,
00:16:52.844 "data_size": 63488
00:16:52.844 },
00:16:52.844 {
00:16:52.844 "name": null,
00:16:52.844 "uuid": "00000000-0000-0000-0000-000000000003",
00:16:52.844 "is_configured": false,
00:16:52.844 "data_offset": 2048,
00:16:52.844 "data_size": 63488
00:16:52.844 },
00:16:52.844 {
00:16:52.844 "name": null,
00:16:52.844 "uuid": "00000000-0000-0000-0000-000000000004",
00:16:52.844 "is_configured": false,
00:16:52.844 "data_offset": 2048,
00:16:52.844 "data_size": 63488
00:16:52.844 }
00:16:52.844 ]
00:16:52.844 }'
00:16:52.844 12:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:52.844 12:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:53.102 12:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
00:16:53.102 12:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:16:53.102 12:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:16:53.102 12:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:53.102 12:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:53.102 [2024-11-25 12:15:49.170614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:16:53.102 [2024-11-25 12:15:49.170693] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:53.102 [2024-11-25 12:15:49.170731] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80
00:16:53.102 [2024-11-25 12:15:49.170747] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:53.102 [2024-11-25 12:15:49.171305] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:53.102 [2024-11-25 12:15:49.171334] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:16:53.102 [2024-11-25 12:15:49.171469] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:16:53.102 [2024-11-25 12:15:49.171502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:16:53.102 pt3
00:16:53.102 12:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:53.102 12:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:16:53.102 12:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:53.102 12:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:53.102 12:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:53.102 12:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:53.102 12:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:53.102 12:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:53.102 12:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:53.102 12:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:53.102 12:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:53.102 12:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:53.103 12:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:53.103 12:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:53.103 12:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:53.359 12:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:53.359 12:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:53.359 "name": "raid_bdev1",
00:16:53.359 "uuid": "b3b3a278-618c-4e9f-9f87-7cb25dad4595",
00:16:53.359 "strip_size_kb": 0,
00:16:53.359 "state": "configuring",
00:16:53.359 "raid_level": "raid1",
00:16:53.359 "superblock": true,
00:16:53.359 "num_base_bdevs": 4,
00:16:53.359 "num_base_bdevs_discovered": 2,
00:16:53.359 "num_base_bdevs_operational": 3,
00:16:53.359 "base_bdevs_list": [
00:16:53.359 {
00:16:53.359 "name": null,
00:16:53.359 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:53.359 "is_configured": false,
00:16:53.359 "data_offset": 2048,
00:16:53.359 "data_size": 63488
00:16:53.359 },
00:16:53.359 {
00:16:53.359 "name": "pt2",
00:16:53.359 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:53.359 "is_configured": true,
00:16:53.359 "data_offset": 2048,
00:16:53.359 "data_size": 63488
00:16:53.359 },
00:16:53.359 {
00:16:53.359 "name": "pt3",
00:16:53.359 "uuid": "00000000-0000-0000-0000-000000000003",
00:16:53.359 "is_configured": true,
00:16:53.359 "data_offset": 2048,
00:16:53.359 "data_size": 63488
00:16:53.359 },
00:16:53.359 {
00:16:53.359 "name": null,
00:16:53.359 "uuid": "00000000-0000-0000-0000-000000000004",
00:16:53.359 "is_configured": false,
00:16:53.359 "data_offset": 2048,
00:16:53.359 "data_size": 63488
00:16:53.359 }
00:16:53.359 ]
00:16:53.359 }'
00:16:53.359 12:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:53.359 12:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:53.619 12:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
00:16:53.619 12:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:16:53.619 12:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3
00:16:53.619 12:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:16:53.619 12:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:53.619 12:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:53.619 [2024-11-25 12:15:49.694785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:16:53.619 [2024-11-25 12:15:49.694875] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:53.619 [2024-11-25 12:15:49.694910] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80
00:16:53.619 [2024-11-25 12:15:49.694932] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:53.619 [2024-11-25 12:15:49.695528] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:53.619 [2024-11-25 12:15:49.695554] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:16:53.619 [2024-11-25 12:15:49.695660] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:16:53.619 [2024-11-25 12:15:49.695701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:16:53.619 [2024-11-25 12:15:49.695892] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:16:53.619 [2024-11-25 12:15:49.695908] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:16:53.619 [2024-11-25 12:15:49.696222] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:16:53.619 [2024-11-25 12:15:49.696449] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:16:53.619 [2024-11-25 12:15:49.696473] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
00:16:53.619 [2024-11-25 12:15:49.696648] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:53.619 pt4
00:16:53.619 12:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:53.619 12:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:16:53.619 12:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:53.619 12:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:53.619 12:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:53.619 12:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:53.619 12:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:53.619 12:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:53.619 12:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:53.619 12:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:53.619 12:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:53.619 12:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:53.619 12:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:53.619 12:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:53.619 12:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:53.877 12:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:53.877 12:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:53.877 "name": "raid_bdev1",
00:16:53.877 "uuid": "b3b3a278-618c-4e9f-9f87-7cb25dad4595",
00:16:53.877 "strip_size_kb": 0,
00:16:53.877 "state": "online",
00:16:53.877 "raid_level": "raid1",
00:16:53.877 "superblock": true,
00:16:53.877 "num_base_bdevs": 4,
00:16:53.877 "num_base_bdevs_discovered": 3,
00:16:53.877 "num_base_bdevs_operational": 3,
00:16:53.877 "base_bdevs_list": [
00:16:53.877 {
00:16:53.877 "name": null,
00:16:53.877 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:53.877 "is_configured": false,
00:16:53.877 "data_offset": 2048,
00:16:53.878 "data_size": 63488
00:16:53.878 },
00:16:53.878 {
00:16:53.878 "name": "pt2",
00:16:53.878 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:53.878 "is_configured": true,
00:16:53.878 "data_offset": 2048,
00:16:53.878 "data_size": 63488
00:16:53.878 },
00:16:53.878 {
00:16:53.878 "name": "pt3",
00:16:53.878 "uuid": "00000000-0000-0000-0000-000000000003",
00:16:53.878 "is_configured": true,
00:16:53.878 "data_offset": 2048,
00:16:53.878 "data_size": 63488
00:16:53.878 },
00:16:53.878 {
00:16:53.878 "name": "pt4",
00:16:53.878 "uuid": "00000000-0000-0000-0000-000000000004",
00:16:53.878 "is_configured": true,
00:16:53.878 "data_offset": 2048,
00:16:53.878 "data_size": 63488
00:16:53.878 }
00:16:53.878 ]
00:16:53.878 }'
00:16:53.878 12:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:53.878 12:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:54.136 12:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:16:54.136 12:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:54.136 12:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:54.395 [2024-11-25 12:15:50.226845] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:54.395 [2024-11-25 12:15:50.226879] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:54.395 [2024-11-25 12:15:50.226981] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:54.395 [2024-11-25 12:15:50.227081] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:54.395 [2024-11-25 12:15:50.227102] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
00:16:54.395 12:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:54.395 12:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:54.395 12:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:54.395 12:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:16:54.395 12:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:54.395 12:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:54.395 12:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:16:54.395 12:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:16:54.395 12:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']'
00:16:54.395 12:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3
00:16:54.395 12:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4
00:16:54.395 12:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:54.395 12:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:54.395 12:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:54.395 12:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:16:54.395 12:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:54.395 12:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:54.395 [2024-11-25 12:15:50.294833] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:16:54.395 [2024-11-25 12:15:50.294906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:54.395 [2024-11-25 12:15:50.294934] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080
00:16:54.395 [2024-11-25 12:15:50.294950] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:54.395 [2024-11-25 12:15:50.298070] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:54.395 [2024-11-25 12:15:50.298274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:16:54.395 [2024-11-25 12:15:50.298526] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:16:54.395 [2024-11-25 12:15:50.298716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:16:54.395 [2024-11-25 12:15:50.299017] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:16:54.395 [2024-11-25 12:15:50.299049] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:54.395 [2024-11-25 12:15:50.299071] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring
00:16:54.395 [2024-11-25 12:15:50.299155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:54.395 [2024-11-25 12:15:50.299362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:16:54.395 pt1
00:16:54.395 12:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:54.395 12:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']'
00:16:54.395 12:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:16:54.395 12:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:54.395 12:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:54.395 12:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:54.395 12:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:54.395 12:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:54.395 12:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:54.395 12:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:54.395 12:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:54.395 12:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:54.395 12:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:54.395 12:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:54.395 12:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:54.395 12:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:54.395 12:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:54.395 12:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:54.395 "name": "raid_bdev1",
00:16:54.395 "uuid": "b3b3a278-618c-4e9f-9f87-7cb25dad4595",
00:16:54.395 "strip_size_kb": 0,
00:16:54.395 "state": "configuring",
00:16:54.395 "raid_level": "raid1",
00:16:54.395 "superblock": true,
00:16:54.395 "num_base_bdevs": 4,
00:16:54.395 "num_base_bdevs_discovered": 2,
00:16:54.395 "num_base_bdevs_operational": 3,
00:16:54.395 "base_bdevs_list": [
00:16:54.395 {
00:16:54.395 "name": null,
00:16:54.395 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:54.395 "is_configured": false,
00:16:54.395 "data_offset": 2048,
"data_size": 63488 00:16:54.395 }, 00:16:54.395 { 00:16:54.395 "name": "pt2", 00:16:54.395 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:54.395 "is_configured": true, 00:16:54.395 "data_offset": 2048, 00:16:54.395 "data_size": 63488 00:16:54.395 }, 00:16:54.395 { 00:16:54.395 "name": "pt3", 00:16:54.395 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:54.395 "is_configured": true, 00:16:54.395 "data_offset": 2048, 00:16:54.395 "data_size": 63488 00:16:54.395 }, 00:16:54.395 { 00:16:54.395 "name": null, 00:16:54.395 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:54.395 "is_configured": false, 00:16:54.395 "data_offset": 2048, 00:16:54.395 "data_size": 63488 00:16:54.395 } 00:16:54.395 ] 00:16:54.395 }' 00:16:54.395 12:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.395 12:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.768 12:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:54.768 12:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:54.768 12:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.768 12:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.768 12:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.027 12:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:55.028 12:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:55.028 12:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.028 12:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.028 [2024-11-25 
12:15:50.867245] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:55.028 [2024-11-25 12:15:50.867321] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:55.028 [2024-11-25 12:15:50.867368] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:55.028 [2024-11-25 12:15:50.867387] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:55.028 [2024-11-25 12:15:50.867925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:55.028 [2024-11-25 12:15:50.867959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:55.028 [2024-11-25 12:15:50.868063] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:55.028 [2024-11-25 12:15:50.868108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:55.028 [2024-11-25 12:15:50.868277] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:55.028 [2024-11-25 12:15:50.868293] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:55.028 [2024-11-25 12:15:50.868639] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:55.028 [2024-11-25 12:15:50.868827] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:55.028 [2024-11-25 12:15:50.868849] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:55.028 [2024-11-25 12:15:50.869024] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:55.028 pt4 00:16:55.028 12:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.028 12:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:55.028 12:15:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:55.028 12:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:55.028 12:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:55.028 12:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:55.028 12:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:55.028 12:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.028 12:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.028 12:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.028 12:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.028 12:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.028 12:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.028 12:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.028 12:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.028 12:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.028 12:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.028 "name": "raid_bdev1", 00:16:55.028 "uuid": "b3b3a278-618c-4e9f-9f87-7cb25dad4595", 00:16:55.028 "strip_size_kb": 0, 00:16:55.028 "state": "online", 00:16:55.028 "raid_level": "raid1", 00:16:55.028 "superblock": true, 00:16:55.028 "num_base_bdevs": 4, 00:16:55.028 "num_base_bdevs_discovered": 3, 00:16:55.028 "num_base_bdevs_operational": 3, 00:16:55.028 "base_bdevs_list": [ 00:16:55.028 { 
00:16:55.028 "name": null, 00:16:55.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.028 "is_configured": false, 00:16:55.028 "data_offset": 2048, 00:16:55.028 "data_size": 63488 00:16:55.028 }, 00:16:55.028 { 00:16:55.028 "name": "pt2", 00:16:55.028 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:55.028 "is_configured": true, 00:16:55.028 "data_offset": 2048, 00:16:55.028 "data_size": 63488 00:16:55.028 }, 00:16:55.028 { 00:16:55.028 "name": "pt3", 00:16:55.028 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:55.028 "is_configured": true, 00:16:55.028 "data_offset": 2048, 00:16:55.028 "data_size": 63488 00:16:55.028 }, 00:16:55.028 { 00:16:55.028 "name": "pt4", 00:16:55.028 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:55.028 "is_configured": true, 00:16:55.028 "data_offset": 2048, 00:16:55.028 "data_size": 63488 00:16:55.028 } 00:16:55.028 ] 00:16:55.028 }' 00:16:55.028 12:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.028 12:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.598 12:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:55.598 12:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:55.598 12:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.598 12:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.598 12:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.598 12:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:55.598 12:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:55.598 12:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:55.598 
12:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.598 12:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.598 [2024-11-25 12:15:51.439814] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:55.598 12:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.598 12:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' b3b3a278-618c-4e9f-9f87-7cb25dad4595 '!=' b3b3a278-618c-4e9f-9f87-7cb25dad4595 ']' 00:16:55.598 12:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74678 00:16:55.598 12:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74678 ']' 00:16:55.598 12:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74678 00:16:55.598 12:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:16:55.598 12:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:55.598 12:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74678 00:16:55.598 killing process with pid 74678 00:16:55.598 12:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:55.598 12:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:55.598 12:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74678' 00:16:55.598 12:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74678 00:16:55.598 [2024-11-25 12:15:51.512189] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:55.598 12:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74678 00:16:55.598 [2024-11-25 12:15:51.512328] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:55.598 [2024-11-25 12:15:51.512449] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:55.598 [2024-11-25 12:15:51.512473] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:55.857 [2024-11-25 12:15:51.869190] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:57.236 ************************************ 00:16:57.236 END TEST raid_superblock_test 00:16:57.236 ************************************ 00:16:57.236 12:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:57.236 00:16:57.236 real 0m9.341s 00:16:57.236 user 0m15.385s 00:16:57.236 sys 0m1.356s 00:16:57.236 12:15:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:57.236 12:15:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.236 12:15:52 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:16:57.236 12:15:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:57.236 12:15:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:57.236 12:15:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:57.236 ************************************ 00:16:57.236 START TEST raid_read_error_test 00:16:57.236 ************************************ 00:16:57.236 12:15:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:16:57.236 12:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:16:57.236 12:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:16:57.236 12:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:16:57.236 12:15:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:16:57.236 12:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:57.236 12:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:16:57.236 12:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:57.236 12:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:57.236 12:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:16:57.236 12:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:57.236 12:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:57.236 12:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:16:57.236 12:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:57.236 12:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:57.236 12:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:16:57.236 12:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:57.236 12:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:57.236 12:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:57.236 12:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:16:57.236 12:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:16:57.236 12:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:16:57.236 12:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:16:57.236 12:15:52 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:16:57.236 12:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:16:57.236 12:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:16:57.236 12:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:16:57.236 12:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:16:57.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.236 12:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ZQWsv7ZuNk 00:16:57.236 12:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75176 00:16:57.236 12:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75176 00:16:57.236 12:15:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75176 ']' 00:16:57.236 12:15:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.236 12:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:57.236 12:15:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:57.236 12:15:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.236 12:15:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:57.236 12:15:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.236 [2024-11-25 12:15:53.060156] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:16:57.236 [2024-11-25 12:15:53.060357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75176 ] 00:16:57.236 [2024-11-25 12:15:53.236078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.495 [2024-11-25 12:15:53.369117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.495 [2024-11-25 12:15:53.578918] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:57.495 [2024-11-25 12:15:53.579132] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:58.063 12:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:58.063 12:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:16:58.063 12:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:58.063 12:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:58.063 12:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.063 12:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.063 BaseBdev1_malloc 00:16:58.063 12:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.063 12:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:16:58.063 12:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.063 12:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.063 true 00:16:58.063 12:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:58.063 12:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:58.063 12:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.063 12:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.063 [2024-11-25 12:15:54.101980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:58.063 [2024-11-25 12:15:54.102055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.063 [2024-11-25 12:15:54.102134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:58.063 [2024-11-25 12:15:54.102175] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.063 [2024-11-25 12:15:54.105359] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.063 [2024-11-25 12:15:54.105427] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:58.063 BaseBdev1 00:16:58.063 12:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.063 12:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:58.063 12:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:58.063 12:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.063 12:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.063 BaseBdev2_malloc 00:16:58.063 12:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.063 12:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:16:58.063 12:15:54 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.063 12:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.323 true 00:16:58.323 12:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.323 12:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:58.323 12:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.323 12:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.323 [2024-11-25 12:15:54.159583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:58.323 [2024-11-25 12:15:54.159658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.323 [2024-11-25 12:15:54.159706] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:58.323 [2024-11-25 12:15:54.159740] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.323 [2024-11-25 12:15:54.162636] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.323 [2024-11-25 12:15:54.162839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:58.323 BaseBdev2 00:16:58.323 12:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.323 12:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:58.323 12:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:58.323 12:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.323 12:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.323 BaseBdev3_malloc 00:16:58.323 12:15:54 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.323 12:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:16:58.323 12:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.323 12:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.323 true 00:16:58.323 12:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.323 12:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:58.323 12:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.323 12:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.323 [2024-11-25 12:15:54.233752] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:58.323 [2024-11-25 12:15:54.233820] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.323 [2024-11-25 12:15:54.233852] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:58.323 [2024-11-25 12:15:54.233880] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.323 [2024-11-25 12:15:54.237308] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.323 [2024-11-25 12:15:54.237394] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:58.323 BaseBdev3 00:16:58.323 12:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.323 12:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:58.323 12:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:16:58.323 12:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.323 12:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.323 BaseBdev4_malloc 00:16:58.323 12:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.323 12:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:16:58.323 12:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.323 12:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.323 true 00:16:58.323 12:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.323 12:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:16:58.323 12:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.323 12:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.323 [2024-11-25 12:15:54.294365] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:16:58.323 [2024-11-25 12:15:54.294431] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.323 [2024-11-25 12:15:54.294461] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:58.323 [2024-11-25 12:15:54.294480] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.323 [2024-11-25 12:15:54.297235] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.323 [2024-11-25 12:15:54.297299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:58.323 BaseBdev4 00:16:58.323 12:15:54 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.323 12:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:16:58.323 12:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.323 12:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.323 [2024-11-25 12:15:54.302419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:58.323 [2024-11-25 12:15:54.304981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:58.323 [2024-11-25 12:15:54.305219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:58.323 [2024-11-25 12:15:54.305393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:58.323 [2024-11-25 12:15:54.305747] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:16:58.323 [2024-11-25 12:15:54.305898] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:58.323 [2024-11-25 12:15:54.306386] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:16:58.323 [2024-11-25 12:15:54.306752] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:16:58.324 [2024-11-25 12:15:54.306877] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:16:58.324 [2024-11-25 12:15:54.307300] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.324 12:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.324 12:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:58.324 12:15:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:58.324 12:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:58.324 12:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:58.324 12:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:58.324 12:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:58.324 12:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.324 12:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.324 12:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.324 12:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.324 12:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.324 12:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.324 12:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.324 12:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.324 12:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.324 12:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.324 "name": "raid_bdev1", 00:16:58.324 "uuid": "d7a50677-ce8b-4056-8eb3-18e0f659d920", 00:16:58.324 "strip_size_kb": 0, 00:16:58.324 "state": "online", 00:16:58.324 "raid_level": "raid1", 00:16:58.324 "superblock": true, 00:16:58.324 "num_base_bdevs": 4, 00:16:58.324 "num_base_bdevs_discovered": 4, 00:16:58.324 "num_base_bdevs_operational": 4, 00:16:58.324 "base_bdevs_list": [ 00:16:58.324 { 
00:16:58.324 "name": "BaseBdev1", 00:16:58.324 "uuid": "a92126f7-97f1-5256-8550-23bf23ab9ef5", 00:16:58.324 "is_configured": true, 00:16:58.324 "data_offset": 2048, 00:16:58.324 "data_size": 63488 00:16:58.324 }, 00:16:58.324 { 00:16:58.324 "name": "BaseBdev2", 00:16:58.324 "uuid": "3870ee24-cb1f-5b28-b89c-002e9c58f8dc", 00:16:58.324 "is_configured": true, 00:16:58.324 "data_offset": 2048, 00:16:58.324 "data_size": 63488 00:16:58.324 }, 00:16:58.324 { 00:16:58.324 "name": "BaseBdev3", 00:16:58.324 "uuid": "5038d9e8-4d28-5caf-9401-b2d82c2c8fda", 00:16:58.324 "is_configured": true, 00:16:58.324 "data_offset": 2048, 00:16:58.324 "data_size": 63488 00:16:58.324 }, 00:16:58.324 { 00:16:58.324 "name": "BaseBdev4", 00:16:58.324 "uuid": "e383b6b3-b4b8-58ad-aaf2-714ac1a1c671", 00:16:58.324 "is_configured": true, 00:16:58.324 "data_offset": 2048, 00:16:58.324 "data_size": 63488 00:16:58.324 } 00:16:58.324 ] 00:16:58.324 }' 00:16:58.324 12:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.324 12:15:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.892 12:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:58.892 12:15:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:58.892 [2024-11-25 12:15:54.892896] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:16:59.828 12:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:59.828 12:15:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.828 12:15:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.828 12:15:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.828 12:15:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:59.828 12:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:59.828 12:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:16:59.828 12:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:16:59.828 12:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:59.828 12:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.828 12:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.828 12:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:59.828 12:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:59.828 12:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:59.828 12:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.828 12:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.828 12:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.828 12:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.828 12:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.828 12:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.828 12:15:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.828 12:15:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.828 12:15:55 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.828 12:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.828 "name": "raid_bdev1", 00:16:59.828 "uuid": "d7a50677-ce8b-4056-8eb3-18e0f659d920", 00:16:59.828 "strip_size_kb": 0, 00:16:59.828 "state": "online", 00:16:59.828 "raid_level": "raid1", 00:16:59.828 "superblock": true, 00:16:59.828 "num_base_bdevs": 4, 00:16:59.828 "num_base_bdevs_discovered": 4, 00:16:59.828 "num_base_bdevs_operational": 4, 00:16:59.828 "base_bdevs_list": [ 00:16:59.828 { 00:16:59.828 "name": "BaseBdev1", 00:16:59.828 "uuid": "a92126f7-97f1-5256-8550-23bf23ab9ef5", 00:16:59.828 "is_configured": true, 00:16:59.828 "data_offset": 2048, 00:16:59.828 "data_size": 63488 00:16:59.828 }, 00:16:59.828 { 00:16:59.828 "name": "BaseBdev2", 00:16:59.828 "uuid": "3870ee24-cb1f-5b28-b89c-002e9c58f8dc", 00:16:59.828 "is_configured": true, 00:16:59.828 "data_offset": 2048, 00:16:59.828 "data_size": 63488 00:16:59.828 }, 00:16:59.828 { 00:16:59.828 "name": "BaseBdev3", 00:16:59.828 "uuid": "5038d9e8-4d28-5caf-9401-b2d82c2c8fda", 00:16:59.828 "is_configured": true, 00:16:59.828 "data_offset": 2048, 00:16:59.828 "data_size": 63488 00:16:59.828 }, 00:16:59.828 { 00:16:59.828 "name": "BaseBdev4", 00:16:59.828 "uuid": "e383b6b3-b4b8-58ad-aaf2-714ac1a1c671", 00:16:59.828 "is_configured": true, 00:16:59.828 "data_offset": 2048, 00:16:59.828 "data_size": 63488 00:16:59.828 } 00:16:59.828 ] 00:16:59.828 }' 00:16:59.828 12:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.828 12:15:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.407 12:15:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:00.407 12:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.407 12:15:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:00.407 [2024-11-25 12:15:56.309177] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:00.407 [2024-11-25 12:15:56.309216] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:00.407 [2024-11-25 12:15:56.312813] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:00.407 [2024-11-25 12:15:56.313037] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:00.407 [2024-11-25 12:15:56.313389] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:00.407 [2024-11-25 12:15:56.313563] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:17:00.407 { 00:17:00.407 "results": [ 00:17:00.407 { 00:17:00.407 "job": "raid_bdev1", 00:17:00.407 "core_mask": "0x1", 00:17:00.407 "workload": "randrw", 00:17:00.407 "percentage": 50, 00:17:00.407 "status": "finished", 00:17:00.407 "queue_depth": 1, 00:17:00.407 "io_size": 131072, 00:17:00.407 "runtime": 1.413727, 00:17:00.407 "iops": 7582.086216079908, 00:17:00.407 "mibps": 947.7607770099885, 00:17:00.407 "io_failed": 0, 00:17:00.407 "io_timeout": 0, 00:17:00.407 "avg_latency_us": 127.70035739426167, 00:17:00.407 "min_latency_us": 41.89090909090909, 00:17:00.407 "max_latency_us": 1951.1854545454546 00:17:00.407 } 00:17:00.407 ], 00:17:00.407 "core_count": 1 00:17:00.407 } 00:17:00.407 12:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.407 12:15:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75176 00:17:00.407 12:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75176 ']' 00:17:00.407 12:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75176 00:17:00.407 12:15:56 bdev_raid.raid_read_error_test --
common/autotest_common.sh@959 -- # uname 00:17:00.407 12:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:00.407 12:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75176 00:17:00.407 12:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:00.407 12:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:00.407 12:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75176' 00:17:00.407 killing process with pid 75176 00:17:00.407 12:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75176 00:17:00.407 [2024-11-25 12:15:56.361919] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:00.407 12:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75176 00:17:00.665 [2024-11-25 12:15:56.646365] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:02.042 12:15:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ZQWsv7ZuNk 00:17:02.042 12:15:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:17:02.042 12:15:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:17:02.042 12:15:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:17:02.042 ************************************ 00:17:02.042 END TEST raid_read_error_test 00:17:02.042 ************************************ 00:17:02.042 12:15:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:17:02.042 12:15:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:02.042 12:15:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:02.042 12:15:57 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:17:02.042 00:17:02.042 real 0m4.802s 00:17:02.042 user 0m5.849s 00:17:02.042 sys 0m0.598s 00:17:02.042 12:15:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:02.042 12:15:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.042 12:15:57 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:17:02.042 12:15:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:02.042 12:15:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:02.042 12:15:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:02.042 ************************************ 00:17:02.042 START TEST raid_write_error_test 00:17:02.042 ************************************ 00:17:02.042 12:15:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:17:02.042 12:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:17:02.042 12:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:17:02.042 12:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:17:02.042 12:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:17:02.042 12:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:02.042 12:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:17:02.042 12:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:02.042 12:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:02.042 12:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:17:02.042 12:15:57 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:02.042 12:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:02.042 12:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:17:02.042 12:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:02.042 12:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:02.042 12:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:17:02.042 12:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:02.042 12:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:02.042 12:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:02.042 12:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:17:02.042 12:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:17:02.042 12:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:17:02.042 12:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:17:02.042 12:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:17:02.042 12:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:17:02.042 12:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:17:02.042 12:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:17:02.042 12:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:17:02.042 12:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.DJg8vrOose 00:17:02.042 12:15:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75322 00:17:02.042 12:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75322 00:17:02.042 12:15:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75322 ']' 00:17:02.042 12:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:02.042 12:15:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.042 12:15:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:02.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.042 12:15:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.042 12:15:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:02.042 12:15:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.042 [2024-11-25 12:15:57.916879] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:17:02.042 [2024-11-25 12:15:57.917065] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75322 ] 00:17:02.042 [2024-11-25 12:15:58.104447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.300 [2024-11-25 12:15:58.234700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.558 [2024-11-25 12:15:58.439045] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:02.558 [2024-11-25 12:15:58.439087] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:02.816 12:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:02.816 12:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:17:02.816 12:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:02.816 12:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:02.816 12:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.816 12:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.074 BaseBdev1_malloc 00:17:03.074 12:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.074 12:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:17:03.074 12:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.074 12:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.074 true 00:17:03.074 12:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:03.074 12:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:03.075 12:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.075 12:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.075 [2024-11-25 12:15:58.955344] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:03.075 [2024-11-25 12:15:58.955439] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.075 [2024-11-25 12:15:58.955473] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:17:03.075 [2024-11-25 12:15:58.955491] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.075 [2024-11-25 12:15:58.958284] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.075 [2024-11-25 12:15:58.958354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:03.075 BaseBdev1 00:17:03.075 12:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.075 12:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:03.075 12:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:03.075 12:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.075 12:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.075 BaseBdev2_malloc 00:17:03.075 12:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.075 12:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:17:03.075 12:15:58 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.075 12:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.075 true 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.075 [2024-11-25 12:15:59.015352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:03.075 [2024-11-25 12:15:59.015431] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.075 [2024-11-25 12:15:59.015467] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:03.075 [2024-11-25 12:15:59.015485] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.075 [2024-11-25 12:15:59.018203] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.075 [2024-11-25 12:15:59.018256] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:03.075 BaseBdev2 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:17:03.075 BaseBdev3_malloc 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.075 true 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.075 [2024-11-25 12:15:59.085467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:03.075 [2024-11-25 12:15:59.085684] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.075 [2024-11-25 12:15:59.085726] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:03.075 [2024-11-25 12:15:59.085746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.075 [2024-11-25 12:15:59.088567] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.075 [2024-11-25 12:15:59.088618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:03.075 BaseBdev3 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.075 BaseBdev4_malloc 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.075 true 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.075 [2024-11-25 12:15:59.141988] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:17:03.075 [2024-11-25 12:15:59.142056] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.075 [2024-11-25 12:15:59.142083] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:03.075 [2024-11-25 12:15:59.142110] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.075 [2024-11-25 12:15:59.144914] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.075 [2024-11-25 12:15:59.144968] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:03.075 BaseBdev4 
00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.075 [2024-11-25 12:15:59.150052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:03.075 [2024-11-25 12:15:59.152606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:03.075 [2024-11-25 12:15:59.152719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:03.075 [2024-11-25 12:15:59.152829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:03.075 [2024-11-25 12:15:59.153132] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:17:03.075 [2024-11-25 12:15:59.153155] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:03.075 [2024-11-25 12:15:59.153621] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:17:03.075 [2024-11-25 12:15:59.153910] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:17:03.075 [2024-11-25 12:15:59.153966] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:17:03.075 [2024-11-25 12:15:59.154381] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.075 12:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.333 12:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.333 12:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.333 "name": "raid_bdev1", 00:17:03.333 "uuid": "52a9408f-3bc2-4ffc-a433-5418935f2cbe", 00:17:03.333 "strip_size_kb": 0, 00:17:03.333 "state": "online", 00:17:03.333 "raid_level": "raid1", 00:17:03.333 "superblock": true, 00:17:03.333 "num_base_bdevs": 4, 00:17:03.333 "num_base_bdevs_discovered": 4, 00:17:03.333 
"num_base_bdevs_operational": 4, 00:17:03.333 "base_bdevs_list": [ 00:17:03.333 { 00:17:03.333 "name": "BaseBdev1", 00:17:03.333 "uuid": "10bfb901-c8e1-5168-9fe2-6ac8f60509c8", 00:17:03.333 "is_configured": true, 00:17:03.333 "data_offset": 2048, 00:17:03.333 "data_size": 63488 00:17:03.333 }, 00:17:03.333 { 00:17:03.333 "name": "BaseBdev2", 00:17:03.333 "uuid": "38a66b11-61fb-566f-9772-411ef5dfcfa7", 00:17:03.333 "is_configured": true, 00:17:03.333 "data_offset": 2048, 00:17:03.333 "data_size": 63488 00:17:03.333 }, 00:17:03.333 { 00:17:03.333 "name": "BaseBdev3", 00:17:03.333 "uuid": "c96f1464-64e4-58f3-b3bb-d20280c40670", 00:17:03.333 "is_configured": true, 00:17:03.333 "data_offset": 2048, 00:17:03.333 "data_size": 63488 00:17:03.333 }, 00:17:03.333 { 00:17:03.333 "name": "BaseBdev4", 00:17:03.333 "uuid": "f452945d-522d-5527-aa0b-51a963c1a203", 00:17:03.333 "is_configured": true, 00:17:03.333 "data_offset": 2048, 00:17:03.333 "data_size": 63488 00:17:03.333 } 00:17:03.333 ] 00:17:03.333 }' 00:17:03.333 12:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.333 12:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.591 12:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:17:03.591 12:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:03.864 [2024-11-25 12:15:59.795927] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:17:04.806 12:16:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:17:04.806 12:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.806 12:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.806 [2024-11-25 12:16:00.675938] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:17:04.806 [2024-11-25 12:16:00.676007] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:04.806 [2024-11-25 12:16:00.676281] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:17:04.806 12:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.806 12:16:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:17:04.806 12:16:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:17:04.806 12:16:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:17:04.806 12:16:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:17:04.806 12:16:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:04.806 12:16:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:04.806 12:16:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:04.806 12:16:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:04.806 12:16:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:04.806 12:16:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:04.806 12:16:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.806 12:16:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.806 12:16:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.806 12:16:00 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.806 12:16:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.806 12:16:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.806 12:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.806 12:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.806 12:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.806 12:16:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.806 "name": "raid_bdev1", 00:17:04.806 "uuid": "52a9408f-3bc2-4ffc-a433-5418935f2cbe", 00:17:04.806 "strip_size_kb": 0, 00:17:04.806 "state": "online", 00:17:04.806 "raid_level": "raid1", 00:17:04.806 "superblock": true, 00:17:04.806 "num_base_bdevs": 4, 00:17:04.806 "num_base_bdevs_discovered": 3, 00:17:04.806 "num_base_bdevs_operational": 3, 00:17:04.806 "base_bdevs_list": [ 00:17:04.806 { 00:17:04.806 "name": null, 00:17:04.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.806 "is_configured": false, 00:17:04.806 "data_offset": 0, 00:17:04.806 "data_size": 63488 00:17:04.806 }, 00:17:04.806 { 00:17:04.806 "name": "BaseBdev2", 00:17:04.806 "uuid": "38a66b11-61fb-566f-9772-411ef5dfcfa7", 00:17:04.806 "is_configured": true, 00:17:04.806 "data_offset": 2048, 00:17:04.806 "data_size": 63488 00:17:04.806 }, 00:17:04.806 { 00:17:04.806 "name": "BaseBdev3", 00:17:04.806 "uuid": "c96f1464-64e4-58f3-b3bb-d20280c40670", 00:17:04.806 "is_configured": true, 00:17:04.806 "data_offset": 2048, 00:17:04.806 "data_size": 63488 00:17:04.806 }, 00:17:04.806 { 00:17:04.806 "name": "BaseBdev4", 00:17:04.806 "uuid": "f452945d-522d-5527-aa0b-51a963c1a203", 00:17:04.806 "is_configured": true, 00:17:04.806 "data_offset": 2048, 00:17:04.806 "data_size": 63488 00:17:04.806 } 00:17:04.806 ] 
00:17:04.806 }' 00:17:04.806 12:16:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.806 12:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.373 12:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:05.373 12:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.373 12:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.373 [2024-11-25 12:16:01.199568] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:05.373 [2024-11-25 12:16:01.199606] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:05.373 [2024-11-25 12:16:01.202857] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:05.373 [2024-11-25 12:16:01.203067] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:05.373 [2024-11-25 12:16:01.203225] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:05.373 [2024-11-25 12:16:01.203248] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:17:05.373 { 00:17:05.373 "results": [ 00:17:05.373 { 00:17:05.373 "job": "raid_bdev1", 00:17:05.373 "core_mask": "0x1", 00:17:05.373 "workload": "randrw", 00:17:05.373 "percentage": 50, 00:17:05.373 "status": "finished", 00:17:05.373 "queue_depth": 1, 00:17:05.373 "io_size": 131072, 00:17:05.373 "runtime": 1.401084, 00:17:05.373 "iops": 8393.501032058035, 00:17:05.373 "mibps": 1049.1876290072544, 00:17:05.373 "io_failed": 0, 00:17:05.373 "io_timeout": 0, 00:17:05.373 "avg_latency_us": 114.81625726654296, 00:17:05.373 "min_latency_us": 43.28727272727273, 00:17:05.373 "max_latency_us": 1817.1345454545456 00:17:05.373 } 00:17:05.373 ], 00:17:05.373 "core_count": 1 
00:17:05.373 } 00:17:05.373 12:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.373 12:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75322 00:17:05.373 12:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75322 ']' 00:17:05.373 12:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75322 00:17:05.373 12:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:17:05.373 12:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:05.373 12:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75322 00:17:05.373 killing process with pid 75322 00:17:05.373 12:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:05.373 12:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:05.373 12:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75322' 00:17:05.373 12:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75322 00:17:05.373 12:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75322 00:17:05.373 [2024-11-25 12:16:01.237216] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:05.632 [2024-11-25 12:16:01.524586] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:06.566 12:16:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.DJg8vrOose 00:17:06.566 12:16:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:17:06.566 12:16:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:17:06.566 12:16:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:17:06.566 12:16:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:17:06.566 12:16:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:06.566 12:16:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:06.566 12:16:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:17:06.566 00:17:06.566 real 0m4.830s 00:17:06.566 user 0m5.953s 00:17:06.566 sys 0m0.593s 00:17:06.566 ************************************ 00:17:06.566 END TEST raid_write_error_test 00:17:06.566 12:16:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:06.566 12:16:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.566 ************************************ 00:17:06.825 12:16:02 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:17:06.825 12:16:02 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:17:06.825 12:16:02 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:17:06.825 12:16:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:06.825 12:16:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:06.825 12:16:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:06.825 ************************************ 00:17:06.825 START TEST raid_rebuild_test 00:17:06.825 ************************************ 00:17:06.825 12:16:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:17:06.825 12:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:06.825 12:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:06.825 12:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:06.825 
12:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:06.825 12:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:06.825 12:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:06.825 12:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:06.825 12:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:06.825 12:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:06.825 12:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:06.825 12:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:06.825 12:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:06.825 12:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:06.825 12:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:06.825 12:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:06.825 12:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:06.825 12:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:06.825 12:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:06.825 12:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:06.825 12:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:06.825 12:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:06.825 12:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:06.825 12:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:17:06.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.825 12:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75460 00:17:06.825 12:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75460 00:17:06.825 12:16:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75460 ']' 00:17:06.825 12:16:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.825 12:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:06.826 12:16:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:06.826 12:16:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.826 12:16:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:06.826 12:16:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.826 [2024-11-25 12:16:02.795823] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:17:06.826 [2024-11-25 12:16:02.796695] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75460 ] 00:17:06.826 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:06.826 Zero copy mechanism will not be used. 
00:17:07.085 [2024-11-25 12:16:02.981552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.085 [2024-11-25 12:16:03.111393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.343 [2024-11-25 12:16:03.316711] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:07.343 [2024-11-25 12:16:03.316960] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:07.909 12:16:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:07.909 12:16:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:17:07.909 12:16:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:07.909 12:16:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:07.909 12:16:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.909 12:16:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.909 BaseBdev1_malloc 00:17:07.909 12:16:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.909 12:16:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:07.909 12:16:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.909 12:16:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.909 [2024-11-25 12:16:03.844482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:07.909 [2024-11-25 12:16:03.844567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.909 [2024-11-25 12:16:03.844599] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:07.909 [2024-11-25 12:16:03.844618] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.909 [2024-11-25 12:16:03.847466] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.909 [2024-11-25 12:16:03.847520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:07.909 BaseBdev1 00:17:07.909 12:16:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.909 12:16:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:07.909 12:16:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:07.909 12:16:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.909 12:16:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.909 BaseBdev2_malloc 00:17:07.909 12:16:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.909 12:16:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:07.909 12:16:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.909 12:16:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.909 [2024-11-25 12:16:03.897285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:07.909 [2024-11-25 12:16:03.897416] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.909 [2024-11-25 12:16:03.897462] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:07.909 [2024-11-25 12:16:03.897493] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.910 [2024-11-25 12:16:03.900290] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.910 [2024-11-25 12:16:03.900357] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:07.910 BaseBdev2 00:17:07.910 12:16:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.910 12:16:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:07.910 12:16:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.910 12:16:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.910 spare_malloc 00:17:07.910 12:16:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.910 12:16:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:07.910 12:16:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.910 12:16:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.910 spare_delay 00:17:07.910 12:16:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.910 12:16:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:07.910 12:16:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.910 12:16:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.910 [2024-11-25 12:16:03.966992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:07.910 [2024-11-25 12:16:03.967200] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.910 [2024-11-25 12:16:03.967243] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:07.910 [2024-11-25 12:16:03.967264] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.910 [2024-11-25 
12:16:03.970022] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.910 [2024-11-25 12:16:03.970065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:07.910 spare 00:17:07.910 12:16:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.910 12:16:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:07.910 12:16:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.910 12:16:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.910 [2024-11-25 12:16:03.979059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:07.910 [2024-11-25 12:16:03.981672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:07.910 [2024-11-25 12:16:03.981914] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:07.910 [2024-11-25 12:16:03.981981] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:07.910 [2024-11-25 12:16:03.982446] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:07.910 [2024-11-25 12:16:03.982785] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:07.910 [2024-11-25 12:16:03.982913] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:07.910 [2024-11-25 12:16:03.983305] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:07.910 12:16:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.910 12:16:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:07.910 12:16:03 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.910 12:16:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:07.910 12:16:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:07.910 12:16:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:07.910 12:16:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:07.910 12:16:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.910 12:16:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.910 12:16:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.910 12:16:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.910 12:16:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.910 12:16:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.910 12:16:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.910 12:16:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.168 12:16:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.168 12:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.168 "name": "raid_bdev1", 00:17:08.168 "uuid": "2901ac3d-fa90-4221-9024-df293a816008", 00:17:08.168 "strip_size_kb": 0, 00:17:08.168 "state": "online", 00:17:08.168 "raid_level": "raid1", 00:17:08.168 "superblock": false, 00:17:08.168 "num_base_bdevs": 2, 00:17:08.168 "num_base_bdevs_discovered": 2, 00:17:08.168 "num_base_bdevs_operational": 2, 00:17:08.168 "base_bdevs_list": [ 00:17:08.168 { 00:17:08.168 "name": "BaseBdev1", 
00:17:08.168 "uuid": "adaacb8f-afa7-538a-a208-1f45232f1703", 00:17:08.168 "is_configured": true, 00:17:08.168 "data_offset": 0, 00:17:08.168 "data_size": 65536 00:17:08.168 }, 00:17:08.168 { 00:17:08.168 "name": "BaseBdev2", 00:17:08.168 "uuid": "20ac1dc4-c19c-540f-9785-fccf9cb9b92f", 00:17:08.168 "is_configured": true, 00:17:08.168 "data_offset": 0, 00:17:08.168 "data_size": 65536 00:17:08.168 } 00:17:08.168 ] 00:17:08.168 }' 00:17:08.168 12:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.168 12:16:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.426 12:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:08.426 12:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:08.426 12:16:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.426 12:16:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.426 [2024-11-25 12:16:04.487776] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:08.426 12:16:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.684 12:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:17:08.684 12:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.684 12:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:08.684 12:16:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.684 12:16:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.684 12:16:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.684 12:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:08.684 
12:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:08.684 12:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:08.684 12:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:08.684 12:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:08.684 12:16:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:08.684 12:16:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:08.684 12:16:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:08.684 12:16:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:08.684 12:16:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:08.684 12:16:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:08.684 12:16:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:08.684 12:16:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:08.684 12:16:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:08.943 [2024-11-25 12:16:04.927600] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:08.943 /dev/nbd0 00:17:08.943 12:16:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:08.943 12:16:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:08.943 12:16:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:08.943 12:16:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:08.943 12:16:04 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:08.943 12:16:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:08.943 12:16:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:08.943 12:16:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:08.943 12:16:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:08.943 12:16:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:08.943 12:16:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:08.944 1+0 records in 00:17:08.944 1+0 records out 00:17:08.944 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000660298 s, 6.2 MB/s 00:17:08.944 12:16:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:08.944 12:16:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:08.944 12:16:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:08.944 12:16:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:08.944 12:16:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:08.944 12:16:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:08.944 12:16:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:08.944 12:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:08.944 12:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:08.944 12:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 
00:17:15.534 65536+0 records in 00:17:15.534 65536+0 records out 00:17:15.534 33554432 bytes (34 MB, 32 MiB) copied, 6.55482 s, 5.1 MB/s 00:17:15.534 12:16:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:15.534 12:16:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:15.534 12:16:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:15.534 12:16:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:15.534 12:16:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:15.534 12:16:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:15.534 12:16:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:15.793 [2024-11-25 12:16:11.844004] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:15.793 12:16:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:15.793 12:16:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:15.793 12:16:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:15.793 12:16:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:15.793 12:16:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:15.793 12:16:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:15.793 12:16:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:15.793 12:16:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:15.793 12:16:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:15.793 12:16:11 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.793 12:16:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.793 [2024-11-25 12:16:11.862075] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:15.793 12:16:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.793 12:16:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:15.793 12:16:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.793 12:16:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:15.793 12:16:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:15.793 12:16:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:15.793 12:16:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:15.793 12:16:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.793 12:16:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.793 12:16:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.793 12:16:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.793 12:16:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.793 12:16:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.793 12:16:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.793 12:16:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.052 12:16:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.052 12:16:11 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.052 "name": "raid_bdev1", 00:17:16.052 "uuid": "2901ac3d-fa90-4221-9024-df293a816008", 00:17:16.052 "strip_size_kb": 0, 00:17:16.052 "state": "online", 00:17:16.052 "raid_level": "raid1", 00:17:16.052 "superblock": false, 00:17:16.052 "num_base_bdevs": 2, 00:17:16.052 "num_base_bdevs_discovered": 1, 00:17:16.052 "num_base_bdevs_operational": 1, 00:17:16.052 "base_bdevs_list": [ 00:17:16.052 { 00:17:16.052 "name": null, 00:17:16.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.052 "is_configured": false, 00:17:16.052 "data_offset": 0, 00:17:16.052 "data_size": 65536 00:17:16.052 }, 00:17:16.052 { 00:17:16.052 "name": "BaseBdev2", 00:17:16.052 "uuid": "20ac1dc4-c19c-540f-9785-fccf9cb9b92f", 00:17:16.052 "is_configured": true, 00:17:16.052 "data_offset": 0, 00:17:16.052 "data_size": 65536 00:17:16.052 } 00:17:16.052 ] 00:17:16.052 }' 00:17:16.052 12:16:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.052 12:16:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.311 12:16:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:16.311 12:16:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.311 12:16:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.311 [2024-11-25 12:16:12.358327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:16.311 [2024-11-25 12:16:12.375299] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:17:16.311 12:16:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.311 12:16:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:16.311 [2024-11-25 12:16:12.377931] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:17:17.688 12:16:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:17.688 12:16:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.688 12:16:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:17.688 12:16:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:17.688 12:16:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.688 12:16:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.688 12:16:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.688 12:16:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.688 12:16:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.688 12:16:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.688 12:16:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.688 "name": "raid_bdev1", 00:17:17.688 "uuid": "2901ac3d-fa90-4221-9024-df293a816008", 00:17:17.688 "strip_size_kb": 0, 00:17:17.688 "state": "online", 00:17:17.688 "raid_level": "raid1", 00:17:17.688 "superblock": false, 00:17:17.688 "num_base_bdevs": 2, 00:17:17.688 "num_base_bdevs_discovered": 2, 00:17:17.688 "num_base_bdevs_operational": 2, 00:17:17.688 "process": { 00:17:17.688 "type": "rebuild", 00:17:17.688 "target": "spare", 00:17:17.688 "progress": { 00:17:17.688 "blocks": 20480, 00:17:17.688 "percent": 31 00:17:17.688 } 00:17:17.688 }, 00:17:17.688 "base_bdevs_list": [ 00:17:17.688 { 00:17:17.688 "name": "spare", 00:17:17.688 "uuid": "8e4ffaae-5fe7-5ce1-ae79-548295484ccb", 00:17:17.688 "is_configured": true, 00:17:17.688 "data_offset": 0, 00:17:17.688 
"data_size": 65536 00:17:17.688 }, 00:17:17.688 { 00:17:17.688 "name": "BaseBdev2", 00:17:17.688 "uuid": "20ac1dc4-c19c-540f-9785-fccf9cb9b92f", 00:17:17.688 "is_configured": true, 00:17:17.688 "data_offset": 0, 00:17:17.688 "data_size": 65536 00:17:17.688 } 00:17:17.688 ] 00:17:17.688 }' 00:17:17.688 12:16:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.688 12:16:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:17.688 12:16:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.688 12:16:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:17.688 12:16:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:17.688 12:16:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.688 12:16:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.688 [2024-11-25 12:16:13.539618] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:17.688 [2024-11-25 12:16:13.586885] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:17.688 [2024-11-25 12:16:13.586982] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.688 [2024-11-25 12:16:13.587006] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:17.688 [2024-11-25 12:16:13.587021] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:17.688 12:16:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.688 12:16:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:17.688 12:16:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:17:17.688 12:16:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:17.688 12:16:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:17.688 12:16:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:17.688 12:16:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:17.688 12:16:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.688 12:16:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.688 12:16:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.688 12:16:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.688 12:16:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.688 12:16:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.688 12:16:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.688 12:16:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.688 12:16:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.688 12:16:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.688 "name": "raid_bdev1", 00:17:17.688 "uuid": "2901ac3d-fa90-4221-9024-df293a816008", 00:17:17.688 "strip_size_kb": 0, 00:17:17.688 "state": "online", 00:17:17.688 "raid_level": "raid1", 00:17:17.688 "superblock": false, 00:17:17.688 "num_base_bdevs": 2, 00:17:17.688 "num_base_bdevs_discovered": 1, 00:17:17.688 "num_base_bdevs_operational": 1, 00:17:17.688 "base_bdevs_list": [ 00:17:17.688 { 00:17:17.688 "name": null, 00:17:17.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.688 
"is_configured": false, 00:17:17.688 "data_offset": 0, 00:17:17.688 "data_size": 65536 00:17:17.688 }, 00:17:17.688 { 00:17:17.688 "name": "BaseBdev2", 00:17:17.688 "uuid": "20ac1dc4-c19c-540f-9785-fccf9cb9b92f", 00:17:17.688 "is_configured": true, 00:17:17.688 "data_offset": 0, 00:17:17.688 "data_size": 65536 00:17:17.688 } 00:17:17.688 ] 00:17:17.688 }' 00:17:17.688 12:16:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.688 12:16:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.256 12:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:18.256 12:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.256 12:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:18.256 12:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:18.256 12:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.256 12:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.256 12:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.256 12:16:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.256 12:16:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.256 12:16:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.256 12:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.256 "name": "raid_bdev1", 00:17:18.256 "uuid": "2901ac3d-fa90-4221-9024-df293a816008", 00:17:18.256 "strip_size_kb": 0, 00:17:18.256 "state": "online", 00:17:18.256 "raid_level": "raid1", 00:17:18.256 "superblock": false, 00:17:18.256 "num_base_bdevs": 2, 00:17:18.256 
"num_base_bdevs_discovered": 1, 00:17:18.256 "num_base_bdevs_operational": 1, 00:17:18.256 "base_bdevs_list": [ 00:17:18.256 { 00:17:18.256 "name": null, 00:17:18.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.256 "is_configured": false, 00:17:18.256 "data_offset": 0, 00:17:18.256 "data_size": 65536 00:17:18.256 }, 00:17:18.256 { 00:17:18.256 "name": "BaseBdev2", 00:17:18.256 "uuid": "20ac1dc4-c19c-540f-9785-fccf9cb9b92f", 00:17:18.256 "is_configured": true, 00:17:18.256 "data_offset": 0, 00:17:18.256 "data_size": 65536 00:17:18.256 } 00:17:18.256 ] 00:17:18.256 }' 00:17:18.256 12:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.256 12:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:18.256 12:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.256 12:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:18.256 12:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:18.256 12:16:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.256 12:16:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.256 [2024-11-25 12:16:14.311491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:18.256 [2024-11-25 12:16:14.327202] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:17:18.256 12:16:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.256 12:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:18.256 [2024-11-25 12:16:14.329701] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:19.658 12:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:19.658 12:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.658 12:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:19.658 12:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:19.658 12:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.658 12:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.658 12:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.658 12:16:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.658 12:16:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.658 12:16:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.658 12:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.658 "name": "raid_bdev1", 00:17:19.658 "uuid": "2901ac3d-fa90-4221-9024-df293a816008", 00:17:19.658 "strip_size_kb": 0, 00:17:19.658 "state": "online", 00:17:19.658 "raid_level": "raid1", 00:17:19.658 "superblock": false, 00:17:19.658 "num_base_bdevs": 2, 00:17:19.658 "num_base_bdevs_discovered": 2, 00:17:19.658 "num_base_bdevs_operational": 2, 00:17:19.658 "process": { 00:17:19.658 "type": "rebuild", 00:17:19.658 "target": "spare", 00:17:19.658 "progress": { 00:17:19.658 "blocks": 20480, 00:17:19.658 "percent": 31 00:17:19.658 } 00:17:19.658 }, 00:17:19.658 "base_bdevs_list": [ 00:17:19.658 { 00:17:19.658 "name": "spare", 00:17:19.658 "uuid": "8e4ffaae-5fe7-5ce1-ae79-548295484ccb", 00:17:19.658 "is_configured": true, 00:17:19.658 "data_offset": 0, 00:17:19.658 "data_size": 65536 00:17:19.658 }, 00:17:19.658 { 00:17:19.658 "name": "BaseBdev2", 00:17:19.658 "uuid": 
"20ac1dc4-c19c-540f-9785-fccf9cb9b92f", 00:17:19.658 "is_configured": true, 00:17:19.658 "data_offset": 0, 00:17:19.658 "data_size": 65536 00:17:19.658 } 00:17:19.658 ] 00:17:19.658 }' 00:17:19.658 12:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.658 12:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:19.658 12:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.658 12:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:19.658 12:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:19.658 12:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:19.658 12:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:19.658 12:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:19.658 12:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=394 00:17:19.658 12:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:19.658 12:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:19.658 12:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.658 12:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:19.658 12:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:19.658 12:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.658 12:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.658 12:16:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:19.658 12:16:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.658 12:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.658 12:16:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.658 12:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.658 "name": "raid_bdev1", 00:17:19.658 "uuid": "2901ac3d-fa90-4221-9024-df293a816008", 00:17:19.658 "strip_size_kb": 0, 00:17:19.658 "state": "online", 00:17:19.658 "raid_level": "raid1", 00:17:19.658 "superblock": false, 00:17:19.658 "num_base_bdevs": 2, 00:17:19.658 "num_base_bdevs_discovered": 2, 00:17:19.658 "num_base_bdevs_operational": 2, 00:17:19.658 "process": { 00:17:19.658 "type": "rebuild", 00:17:19.658 "target": "spare", 00:17:19.658 "progress": { 00:17:19.658 "blocks": 22528, 00:17:19.658 "percent": 34 00:17:19.658 } 00:17:19.658 }, 00:17:19.658 "base_bdevs_list": [ 00:17:19.658 { 00:17:19.658 "name": "spare", 00:17:19.658 "uuid": "8e4ffaae-5fe7-5ce1-ae79-548295484ccb", 00:17:19.658 "is_configured": true, 00:17:19.658 "data_offset": 0, 00:17:19.658 "data_size": 65536 00:17:19.658 }, 00:17:19.658 { 00:17:19.658 "name": "BaseBdev2", 00:17:19.658 "uuid": "20ac1dc4-c19c-540f-9785-fccf9cb9b92f", 00:17:19.658 "is_configured": true, 00:17:19.658 "data_offset": 0, 00:17:19.658 "data_size": 65536 00:17:19.658 } 00:17:19.658 ] 00:17:19.658 }' 00:17:19.658 12:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.658 12:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:19.658 12:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.658 12:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:19.658 12:16:15 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:17:20.594 12:16:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:20.594 12:16:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:20.594 12:16:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.594 12:16:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:20.594 12:16:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:20.594 12:16:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.594 12:16:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.594 12:16:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.594 12:16:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.594 12:16:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.594 12:16:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.594 12:16:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.594 "name": "raid_bdev1", 00:17:20.594 "uuid": "2901ac3d-fa90-4221-9024-df293a816008", 00:17:20.594 "strip_size_kb": 0, 00:17:20.594 "state": "online", 00:17:20.594 "raid_level": "raid1", 00:17:20.594 "superblock": false, 00:17:20.594 "num_base_bdevs": 2, 00:17:20.594 "num_base_bdevs_discovered": 2, 00:17:20.594 "num_base_bdevs_operational": 2, 00:17:20.594 "process": { 00:17:20.594 "type": "rebuild", 00:17:20.594 "target": "spare", 00:17:20.594 "progress": { 00:17:20.594 "blocks": 47104, 00:17:20.594 "percent": 71 00:17:20.594 } 00:17:20.594 }, 00:17:20.594 "base_bdevs_list": [ 00:17:20.594 { 00:17:20.594 "name": "spare", 00:17:20.594 "uuid": 
"8e4ffaae-5fe7-5ce1-ae79-548295484ccb", 00:17:20.594 "is_configured": true, 00:17:20.594 "data_offset": 0, 00:17:20.594 "data_size": 65536 00:17:20.594 }, 00:17:20.594 { 00:17:20.594 "name": "BaseBdev2", 00:17:20.594 "uuid": "20ac1dc4-c19c-540f-9785-fccf9cb9b92f", 00:17:20.594 "is_configured": true, 00:17:20.594 "data_offset": 0, 00:17:20.594 "data_size": 65536 00:17:20.594 } 00:17:20.594 ] 00:17:20.594 }' 00:17:20.594 12:16:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.867 12:16:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:20.867 12:16:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.867 12:16:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:20.867 12:16:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:21.802 [2024-11-25 12:16:17.553148] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:21.802 [2024-11-25 12:16:17.553246] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:21.802 [2024-11-25 12:16:17.553315] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:21.802 12:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:21.802 12:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:21.802 12:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.802 12:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:21.802 12:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:21.802 12:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.802 12:16:17 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.802 12:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.802 12:16:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.802 12:16:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.802 12:16:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.802 12:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.802 "name": "raid_bdev1", 00:17:21.802 "uuid": "2901ac3d-fa90-4221-9024-df293a816008", 00:17:21.802 "strip_size_kb": 0, 00:17:21.802 "state": "online", 00:17:21.802 "raid_level": "raid1", 00:17:21.802 "superblock": false, 00:17:21.802 "num_base_bdevs": 2, 00:17:21.802 "num_base_bdevs_discovered": 2, 00:17:21.802 "num_base_bdevs_operational": 2, 00:17:21.802 "base_bdevs_list": [ 00:17:21.802 { 00:17:21.802 "name": "spare", 00:17:21.802 "uuid": "8e4ffaae-5fe7-5ce1-ae79-548295484ccb", 00:17:21.802 "is_configured": true, 00:17:21.802 "data_offset": 0, 00:17:21.802 "data_size": 65536 00:17:21.802 }, 00:17:21.802 { 00:17:21.802 "name": "BaseBdev2", 00:17:21.802 "uuid": "20ac1dc4-c19c-540f-9785-fccf9cb9b92f", 00:17:21.802 "is_configured": true, 00:17:21.802 "data_offset": 0, 00:17:21.802 "data_size": 65536 00:17:21.802 } 00:17:21.802 ] 00:17:21.802 }' 00:17:21.802 12:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.061 12:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:22.061 12:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.061 12:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:22.061 12:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # 
break 00:17:22.061 12:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:22.061 12:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:22.061 12:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:22.061 12:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:22.061 12:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:22.061 12:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.061 12:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.061 12:16:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.061 12:16:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.061 12:16:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.061 12:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.061 "name": "raid_bdev1", 00:17:22.061 "uuid": "2901ac3d-fa90-4221-9024-df293a816008", 00:17:22.061 "strip_size_kb": 0, 00:17:22.061 "state": "online", 00:17:22.061 "raid_level": "raid1", 00:17:22.061 "superblock": false, 00:17:22.061 "num_base_bdevs": 2, 00:17:22.061 "num_base_bdevs_discovered": 2, 00:17:22.061 "num_base_bdevs_operational": 2, 00:17:22.061 "base_bdevs_list": [ 00:17:22.061 { 00:17:22.061 "name": "spare", 00:17:22.061 "uuid": "8e4ffaae-5fe7-5ce1-ae79-548295484ccb", 00:17:22.061 "is_configured": true, 00:17:22.061 "data_offset": 0, 00:17:22.061 "data_size": 65536 00:17:22.061 }, 00:17:22.061 { 00:17:22.061 "name": "BaseBdev2", 00:17:22.061 "uuid": "20ac1dc4-c19c-540f-9785-fccf9cb9b92f", 00:17:22.061 "is_configured": true, 00:17:22.061 "data_offset": 0, 00:17:22.061 "data_size": 65536 
00:17:22.061 } 00:17:22.061 ] 00:17:22.061 }' 00:17:22.061 12:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.061 12:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:22.061 12:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.061 12:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:22.061 12:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:22.061 12:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:22.061 12:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:22.061 12:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:22.061 12:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:22.061 12:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:22.061 12:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.061 12:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.061 12:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:22.061 12:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.321 12:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.321 12:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.321 12:16:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.321 12:16:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.321 
12:16:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.321 12:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.321 "name": "raid_bdev1", 00:17:22.321 "uuid": "2901ac3d-fa90-4221-9024-df293a816008", 00:17:22.321 "strip_size_kb": 0, 00:17:22.321 "state": "online", 00:17:22.321 "raid_level": "raid1", 00:17:22.321 "superblock": false, 00:17:22.321 "num_base_bdevs": 2, 00:17:22.321 "num_base_bdevs_discovered": 2, 00:17:22.321 "num_base_bdevs_operational": 2, 00:17:22.321 "base_bdevs_list": [ 00:17:22.321 { 00:17:22.321 "name": "spare", 00:17:22.321 "uuid": "8e4ffaae-5fe7-5ce1-ae79-548295484ccb", 00:17:22.321 "is_configured": true, 00:17:22.321 "data_offset": 0, 00:17:22.321 "data_size": 65536 00:17:22.321 }, 00:17:22.321 { 00:17:22.321 "name": "BaseBdev2", 00:17:22.321 "uuid": "20ac1dc4-c19c-540f-9785-fccf9cb9b92f", 00:17:22.321 "is_configured": true, 00:17:22.321 "data_offset": 0, 00:17:22.321 "data_size": 65536 00:17:22.321 } 00:17:22.321 ] 00:17:22.321 }' 00:17:22.321 12:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.321 12:16:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.579 12:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:22.579 12:16:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.579 12:16:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.579 [2024-11-25 12:16:18.661627] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:22.579 [2024-11-25 12:16:18.661667] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:22.580 [2024-11-25 12:16:18.661777] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:22.580 [2024-11-25 12:16:18.661894] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:22.580 [2024-11-25 12:16:18.661912] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:22.580 12:16:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.838 12:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.838 12:16:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.838 12:16:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.838 12:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:22.838 12:16:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.838 12:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:22.838 12:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:22.838 12:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:22.838 12:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:22.838 12:16:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:22.838 12:16:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:22.838 12:16:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:22.838 12:16:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:22.838 12:16:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:22.838 12:16:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:22.838 12:16:18 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:22.838 12:16:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:22.838 12:16:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:23.097 /dev/nbd0 00:17:23.097 12:16:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:23.097 12:16:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:23.097 12:16:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:23.097 12:16:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:23.097 12:16:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:23.097 12:16:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:23.097 12:16:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:23.097 12:16:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:23.097 12:16:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:23.097 12:16:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:23.097 12:16:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:23.097 1+0 records in 00:17:23.097 1+0 records out 00:17:23.097 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000603983 s, 6.8 MB/s 00:17:23.097 12:16:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.097 12:16:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:23.097 12:16:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # 
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.097 12:16:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:23.097 12:16:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:23.097 12:16:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:23.097 12:16:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:23.097 12:16:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:23.354 /dev/nbd1 00:17:23.354 12:16:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:23.354 12:16:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:23.354 12:16:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:23.354 12:16:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:23.354 12:16:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:23.354 12:16:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:23.354 12:16:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:23.354 12:16:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:23.354 12:16:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:23.354 12:16:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:23.354 12:16:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:23.354 1+0 records in 00:17:23.354 1+0 records out 00:17:23.354 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404884 s, 10.1 MB/s 00:17:23.354 12:16:19 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.354 12:16:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:23.354 12:16:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.354 12:16:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:23.354 12:16:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:23.354 12:16:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:23.354 12:16:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:23.354 12:16:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:23.613 12:16:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:23.613 12:16:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:23.613 12:16:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:23.613 12:16:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:23.613 12:16:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:23.613 12:16:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:23.613 12:16:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:23.870 12:16:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:23.870 12:16:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:23.870 12:16:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:23.870 
12:16:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:23.870 12:16:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:23.870 12:16:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:23.870 12:16:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:23.870 12:16:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:23.870 12:16:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:23.870 12:16:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:24.434 12:16:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:24.434 12:16:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:24.435 12:16:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:24.435 12:16:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:24.435 12:16:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:24.435 12:16:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:24.435 12:16:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:24.435 12:16:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:24.435 12:16:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:24.435 12:16:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75460 00:17:24.435 12:16:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75460 ']' 00:17:24.435 12:16:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75460 00:17:24.435 12:16:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 
-- # uname 00:17:24.435 12:16:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:24.435 12:16:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75460 00:17:24.435 killing process with pid 75460 00:17:24.435 Received shutdown signal, test time was about 60.000000 seconds 00:17:24.435 00:17:24.435 Latency(us) 00:17:24.435 [2024-11-25T12:16:20.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.435 [2024-11-25T12:16:20.526Z] =================================================================================================================== 00:17:24.435 [2024-11-25T12:16:20.526Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:24.435 12:16:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:24.435 12:16:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:24.435 12:16:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75460' 00:17:24.435 12:16:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75460 00:17:24.435 [2024-11-25 12:16:20.280008] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:24.435 12:16:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75460 00:17:24.723 [2024-11-25 12:16:20.542469] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:25.657 ************************************ 00:17:25.657 END TEST raid_rebuild_test 00:17:25.657 ************************************ 00:17:25.657 12:16:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:25.657 00:17:25.657 real 0m18.896s 00:17:25.657 user 0m21.791s 00:17:25.657 sys 0m3.525s 00:17:25.657 12:16:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:25.657 12:16:21 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.657 12:16:21 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:17:25.657 12:16:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:25.657 12:16:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:25.657 12:16:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:25.657 ************************************ 00:17:25.657 START TEST raid_rebuild_test_sb 00:17:25.657 ************************************ 00:17:25.657 12:16:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:25.657 12:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:25.657 12:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:25.657 12:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:25.657 12:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:25.657 12:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:25.657 12:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:25.657 12:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:25.657 12:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:25.657 12:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:25.657 12:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:25.657 12:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:25.657 12:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:25.657 12:16:21 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:25.657 12:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:25.657 12:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:25.657 12:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:25.657 12:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:25.657 12:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:25.657 12:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:25.657 12:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:25.657 12:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:25.657 12:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:25.657 12:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:25.657 12:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:25.657 12:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75918 00:17:25.657 12:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75918 00:17:25.657 12:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:25.657 12:16:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75918 ']' 00:17:25.657 12:16:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.657 12:16:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:25.657 
12:16:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.657 12:16:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:25.657 12:16:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.657 [2024-11-25 12:16:21.738632] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:17:25.657 [2024-11-25 12:16:21.738997] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75918 ] 00:17:25.657 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:25.657 Zero copy mechanism will not be used. 
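The bdevperf invocation above passes `-o 3M` as the I/O size, and the startup notice "I/O size of 3145728 is greater than zero copy threshold (65536)" follows directly from it. A minimal sketch of that arithmetic, with the threshold value taken from the log line itself:

```shell
# bdevperf was started with '-o 3M'. The zero-copy notice in the log
# is just this comparison: 3 MiB per I/O versus the 64 KiB threshold.
io_size=$((3 * 1024 * 1024))       # 3M expressed in bytes
zero_copy_threshold=65536          # value reported in the log
echo "$io_size"                    # 3145728
if ((io_size > zero_copy_threshold)); then
    echo "zero copy mechanism will not be used"
fi
```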
00:17:25.916 [2024-11-25 12:16:21.914512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.175 [2024-11-25 12:16:22.049012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.175 [2024-11-25 12:16:22.259293] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:26.175 [2024-11-25 12:16:22.259505] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:26.742 12:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:26.742 12:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:26.742 12:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:26.742 12:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:26.742 12:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.742 12:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.742 BaseBdev1_malloc 00:17:26.742 12:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.742 12:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:26.742 12:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.742 12:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.742 [2024-11-25 12:16:22.818463] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:26.742 [2024-11-25 12:16:22.818768] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:26.742 [2024-11-25 12:16:22.818813] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:26.742 [2024-11-25 
12:16:22.818835] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:26.742 [2024-11-25 12:16:22.821817] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:26.742 [2024-11-25 12:16:22.822002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:26.742 BaseBdev1 00:17:26.742 12:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.742 12:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:26.742 12:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:26.742 12:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.742 12:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.001 BaseBdev2_malloc 00:17:27.001 12:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.001 12:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:27.001 12:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.001 12:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.001 [2024-11-25 12:16:22.867983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:27.001 [2024-11-25 12:16:22.868058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:27.001 [2024-11-25 12:16:22.868088] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:27.001 [2024-11-25 12:16:22.868109] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:27.002 [2024-11-25 12:16:22.871044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:17:27.002 [2024-11-25 12:16:22.871229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:27.002 BaseBdev2 00:17:27.002 12:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.002 12:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:27.002 12:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.002 12:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.002 spare_malloc 00:17:27.002 12:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.002 12:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:27.002 12:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.002 12:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.002 spare_delay 00:17:27.002 12:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.002 12:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:27.002 12:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.002 12:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.002 [2024-11-25 12:16:22.940237] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:27.002 [2024-11-25 12:16:22.940464] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:27.002 [2024-11-25 12:16:22.940506] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:27.002 [2024-11-25 12:16:22.940526] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:27.002 [2024-11-25 12:16:22.943292] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:27.002 [2024-11-25 12:16:22.943355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:27.002 spare 00:17:27.002 12:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.002 12:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:27.002 12:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.002 12:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.002 [2024-11-25 12:16:22.948331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:27.002 [2024-11-25 12:16:22.950771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:27.002 [2024-11-25 12:16:22.951141] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:27.002 [2024-11-25 12:16:22.951174] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:27.002 [2024-11-25 12:16:22.951521] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:27.002 [2024-11-25 12:16:22.951755] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:27.002 [2024-11-25 12:16:22.951779] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:27.002 [2024-11-25 12:16:22.951967] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:27.002 12:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.002 12:16:22 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:27.002 12:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:27.002 12:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:27.002 12:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:27.002 12:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:27.002 12:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:27.002 12:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.002 12:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.002 12:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.002 12:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.002 12:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.002 12:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.002 12:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.002 12:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.002 12:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.002 12:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.002 "name": "raid_bdev1", 00:17:27.002 "uuid": "46e1d7c2-1e59-45c4-af85-17203dd3c5e6", 00:17:27.002 "strip_size_kb": 0, 00:17:27.002 "state": "online", 00:17:27.002 "raid_level": "raid1", 00:17:27.002 "superblock": true, 00:17:27.002 "num_base_bdevs": 2, 00:17:27.002 
"num_base_bdevs_discovered": 2, 00:17:27.002 "num_base_bdevs_operational": 2, 00:17:27.002 "base_bdevs_list": [ 00:17:27.002 { 00:17:27.002 "name": "BaseBdev1", 00:17:27.002 "uuid": "1df22257-acf4-558e-8ee7-c1281942856c", 00:17:27.002 "is_configured": true, 00:17:27.002 "data_offset": 2048, 00:17:27.002 "data_size": 63488 00:17:27.002 }, 00:17:27.002 { 00:17:27.002 "name": "BaseBdev2", 00:17:27.002 "uuid": "7f5a2b06-38f9-51ba-88e9-01451c574dda", 00:17:27.002 "is_configured": true, 00:17:27.002 "data_offset": 2048, 00:17:27.002 "data_size": 63488 00:17:27.002 } 00:17:27.002 ] 00:17:27.002 }' 00:17:27.002 12:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.002 12:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.571 12:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:27.571 12:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:27.571 12:16:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.571 12:16:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.571 [2024-11-25 12:16:23.456914] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:27.571 12:16:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.571 12:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:17:27.571 12:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:27.571 12:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.571 12:16:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.571 12:16:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
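The `raid_bdev_info` JSON above reports a `data_size` of 63488 blocks, and the raid bdev was configured earlier with "blockcnt 63488, blocklen 512". As a quick sanity check of the sizes this test moves around, those two numbers multiply out to exactly the 32505856 bytes (31 MiB) seen in the full-device `dd` transfers later in the log:

```shell
# Size bookkeeping for the raid1 bdev under test: 63488 blocks of
# 512 bytes. (The data_offset of 2048 blocks in the JSON is where the
# data region begins past the on-disk superblock, since '-s' was
# passed to bdev_raid_create.)
blockcnt=63488
blocklen=512
total_bytes=$((blockcnt * blocklen))
echo "$total_bytes"                     # 32505856
echo "$((total_bytes / 1024 / 1024))"   # 31 (MiB)
```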
00:17:27.571 12:16:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.571 12:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:27.571 12:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:27.571 12:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:27.571 12:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:27.571 12:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:27.571 12:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:27.571 12:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:27.571 12:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:27.571 12:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:27.571 12:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:27.571 12:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:27.571 12:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:27.571 12:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:27.571 12:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:27.830 [2024-11-25 12:16:23.844643] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:27.830 /dev/nbd0 00:17:27.830 12:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:27.830 12:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:17:27.830 12:16:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:27.830 12:16:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:27.830 12:16:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:27.830 12:16:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:27.830 12:16:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:27.830 12:16:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:27.830 12:16:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:27.830 12:16:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:27.830 12:16:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:27.830 1+0 records in 00:17:27.830 1+0 records out 00:17:27.830 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286419 s, 14.3 MB/s 00:17:27.830 12:16:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:27.830 12:16:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:27.830 12:16:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:27.830 12:16:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:27.830 12:16:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:27.830 12:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:27.830 12:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:27.830 12:16:23 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:27.830 12:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:27.830 12:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:17:34.399 63488+0 records in 00:17:34.399 63488+0 records out 00:17:34.399 32505856 bytes (33 MB, 31 MiB) copied, 6.27898 s, 5.2 MB/s 00:17:34.399 12:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:34.399 12:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:34.399 12:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:34.399 12:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:34.399 12:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:34.399 12:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:34.399 12:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:34.399 [2024-11-25 12:16:30.469678] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:34.657 12:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:34.657 12:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:34.657 12:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:34.657 12:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:34.657 12:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:34.657 12:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd0 /proc/partitions 00:17:34.657 12:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:34.657 12:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:34.657 12:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:34.657 12:16:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.657 12:16:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.657 [2024-11-25 12:16:30.501785] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:34.657 12:16:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.657 12:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:34.657 12:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:34.657 12:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:34.657 12:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:34.657 12:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:34.657 12:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:34.657 12:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.657 12:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.657 12:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.657 12:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.657 12:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.657 12:16:30 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.658 12:16:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.658 12:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.658 12:16:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.658 12:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.658 "name": "raid_bdev1", 00:17:34.658 "uuid": "46e1d7c2-1e59-45c4-af85-17203dd3c5e6", 00:17:34.658 "strip_size_kb": 0, 00:17:34.658 "state": "online", 00:17:34.658 "raid_level": "raid1", 00:17:34.658 "superblock": true, 00:17:34.658 "num_base_bdevs": 2, 00:17:34.658 "num_base_bdevs_discovered": 1, 00:17:34.658 "num_base_bdevs_operational": 1, 00:17:34.658 "base_bdevs_list": [ 00:17:34.658 { 00:17:34.658 "name": null, 00:17:34.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.658 "is_configured": false, 00:17:34.658 "data_offset": 0, 00:17:34.658 "data_size": 63488 00:17:34.658 }, 00:17:34.658 { 00:17:34.658 "name": "BaseBdev2", 00:17:34.658 "uuid": "7f5a2b06-38f9-51ba-88e9-01451c574dda", 00:17:34.658 "is_configured": true, 00:17:34.658 "data_offset": 2048, 00:17:34.658 "data_size": 63488 00:17:34.658 } 00:17:34.658 ] 00:17:34.658 }' 00:17:34.658 12:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.658 12:16:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.916 12:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:34.916 12:16:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.916 12:16:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.916 [2024-11-25 12:16:30.993947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:17:35.175 [2024-11-25 12:16:31.010618] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:17:35.175 12:16:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.175 12:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:35.175 [2024-11-25 12:16:31.013241] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:36.111 12:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:36.111 12:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.111 12:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:36.111 12:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:36.111 12:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.111 12:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.111 12:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.111 12:16:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.111 12:16:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.111 12:16:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.111 12:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:36.111 "name": "raid_bdev1", 00:17:36.111 "uuid": "46e1d7c2-1e59-45c4-af85-17203dd3c5e6", 00:17:36.111 "strip_size_kb": 0, 00:17:36.111 "state": "online", 00:17:36.111 "raid_level": "raid1", 00:17:36.111 "superblock": true, 00:17:36.111 "num_base_bdevs": 2, 00:17:36.111 
"num_base_bdevs_discovered": 2, 00:17:36.111 "num_base_bdevs_operational": 2, 00:17:36.111 "process": { 00:17:36.111 "type": "rebuild", 00:17:36.111 "target": "spare", 00:17:36.111 "progress": { 00:17:36.111 "blocks": 20480, 00:17:36.111 "percent": 32 00:17:36.111 } 00:17:36.111 }, 00:17:36.111 "base_bdevs_list": [ 00:17:36.111 { 00:17:36.111 "name": "spare", 00:17:36.111 "uuid": "34a1c6b6-0259-5f06-bf9c-b3b765b5e4b3", 00:17:36.111 "is_configured": true, 00:17:36.111 "data_offset": 2048, 00:17:36.111 "data_size": 63488 00:17:36.111 }, 00:17:36.111 { 00:17:36.111 "name": "BaseBdev2", 00:17:36.111 "uuid": "7f5a2b06-38f9-51ba-88e9-01451c574dda", 00:17:36.111 "is_configured": true, 00:17:36.111 "data_offset": 2048, 00:17:36.111 "data_size": 63488 00:17:36.111 } 00:17:36.111 ] 00:17:36.111 }' 00:17:36.111 12:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.111 12:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:36.111 12:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.111 12:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:36.111 12:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:36.111 12:16:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.111 12:16:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.111 [2024-11-25 12:16:32.183793] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:36.370 [2024-11-25 12:16:32.223748] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:36.370 [2024-11-25 12:16:32.223847] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:36.370 [2024-11-25 12:16:32.223876] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:36.370 [2024-11-25 12:16:32.223900] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:36.370 12:16:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.370 12:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:36.370 12:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:36.370 12:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:36.370 12:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:36.370 12:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:36.370 12:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:36.370 12:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.370 12:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.370 12:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.370 12:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.370 12:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.370 12:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.370 12:16:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.370 12:16:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.370 12:16:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.370 12:16:32 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.370 "name": "raid_bdev1", 00:17:36.370 "uuid": "46e1d7c2-1e59-45c4-af85-17203dd3c5e6", 00:17:36.370 "strip_size_kb": 0, 00:17:36.370 "state": "online", 00:17:36.370 "raid_level": "raid1", 00:17:36.370 "superblock": true, 00:17:36.370 "num_base_bdevs": 2, 00:17:36.370 "num_base_bdevs_discovered": 1, 00:17:36.371 "num_base_bdevs_operational": 1, 00:17:36.371 "base_bdevs_list": [ 00:17:36.371 { 00:17:36.371 "name": null, 00:17:36.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.371 "is_configured": false, 00:17:36.371 "data_offset": 0, 00:17:36.371 "data_size": 63488 00:17:36.371 }, 00:17:36.371 { 00:17:36.371 "name": "BaseBdev2", 00:17:36.371 "uuid": "7f5a2b06-38f9-51ba-88e9-01451c574dda", 00:17:36.371 "is_configured": true, 00:17:36.371 "data_offset": 2048, 00:17:36.371 "data_size": 63488 00:17:36.371 } 00:17:36.371 ] 00:17:36.371 }' 00:17:36.371 12:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.371 12:16:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.938 12:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:36.938 12:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.938 12:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:36.938 12:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:36.938 12:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.938 12:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.938 12:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.938 12:16:32 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.938 12:16:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.938 12:16:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.938 12:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:36.938 "name": "raid_bdev1", 00:17:36.938 "uuid": "46e1d7c2-1e59-45c4-af85-17203dd3c5e6", 00:17:36.938 "strip_size_kb": 0, 00:17:36.938 "state": "online", 00:17:36.938 "raid_level": "raid1", 00:17:36.938 "superblock": true, 00:17:36.938 "num_base_bdevs": 2, 00:17:36.938 "num_base_bdevs_discovered": 1, 00:17:36.938 "num_base_bdevs_operational": 1, 00:17:36.938 "base_bdevs_list": [ 00:17:36.938 { 00:17:36.938 "name": null, 00:17:36.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.938 "is_configured": false, 00:17:36.938 "data_offset": 0, 00:17:36.938 "data_size": 63488 00:17:36.938 }, 00:17:36.938 { 00:17:36.938 "name": "BaseBdev2", 00:17:36.938 "uuid": "7f5a2b06-38f9-51ba-88e9-01451c574dda", 00:17:36.938 "is_configured": true, 00:17:36.938 "data_offset": 2048, 00:17:36.938 "data_size": 63488 00:17:36.938 } 00:17:36.938 ] 00:17:36.938 }' 00:17:36.938 12:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.938 12:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:36.938 12:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.938 12:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:36.938 12:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:36.938 12:16:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.938 12:16:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:36.938 [2024-11-25 12:16:32.922417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:36.938 [2024-11-25 12:16:32.940565] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:17:36.938 12:16:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.938 12:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:36.938 [2024-11-25 12:16:32.943688] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:37.874 12:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:37.874 12:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:37.874 12:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:37.874 12:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:37.874 12:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:37.874 12:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.874 12:16:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.874 12:16:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.874 12:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.874 12:16:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.133 12:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.133 "name": "raid_bdev1", 00:17:38.133 "uuid": "46e1d7c2-1e59-45c4-af85-17203dd3c5e6", 00:17:38.133 "strip_size_kb": 0, 00:17:38.133 "state": "online", 00:17:38.133 "raid_level": "raid1", 
00:17:38.133 "superblock": true, 00:17:38.133 "num_base_bdevs": 2, 00:17:38.133 "num_base_bdevs_discovered": 2, 00:17:38.133 "num_base_bdevs_operational": 2, 00:17:38.133 "process": { 00:17:38.133 "type": "rebuild", 00:17:38.133 "target": "spare", 00:17:38.133 "progress": { 00:17:38.133 "blocks": 18432, 00:17:38.133 "percent": 29 00:17:38.133 } 00:17:38.133 }, 00:17:38.133 "base_bdevs_list": [ 00:17:38.133 { 00:17:38.133 "name": "spare", 00:17:38.133 "uuid": "34a1c6b6-0259-5f06-bf9c-b3b765b5e4b3", 00:17:38.133 "is_configured": true, 00:17:38.133 "data_offset": 2048, 00:17:38.133 "data_size": 63488 00:17:38.133 }, 00:17:38.133 { 00:17:38.133 "name": "BaseBdev2", 00:17:38.133 "uuid": "7f5a2b06-38f9-51ba-88e9-01451c574dda", 00:17:38.133 "is_configured": true, 00:17:38.133 "data_offset": 2048, 00:17:38.133 "data_size": 63488 00:17:38.133 } 00:17:38.133 ] 00:17:38.133 }' 00:17:38.133 12:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.133 12:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:38.133 12:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.133 12:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:38.133 12:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:38.133 12:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:38.133 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:38.133 12:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:38.133 12:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:38.133 12:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:38.133 12:16:34 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=413 00:17:38.133 12:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:38.133 12:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:38.133 12:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:38.133 12:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:38.133 12:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:38.133 12:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.133 12:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.133 12:16:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.133 12:16:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.133 12:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.134 12:16:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.134 12:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.134 "name": "raid_bdev1", 00:17:38.134 "uuid": "46e1d7c2-1e59-45c4-af85-17203dd3c5e6", 00:17:38.134 "strip_size_kb": 0, 00:17:38.134 "state": "online", 00:17:38.134 "raid_level": "raid1", 00:17:38.134 "superblock": true, 00:17:38.134 "num_base_bdevs": 2, 00:17:38.134 "num_base_bdevs_discovered": 2, 00:17:38.134 "num_base_bdevs_operational": 2, 00:17:38.134 "process": { 00:17:38.134 "type": "rebuild", 00:17:38.134 "target": "spare", 00:17:38.134 "progress": { 00:17:38.134 "blocks": 22528, 00:17:38.134 "percent": 35 00:17:38.134 } 00:17:38.134 }, 00:17:38.134 "base_bdevs_list": [ 
00:17:38.134 { 00:17:38.134 "name": "spare", 00:17:38.134 "uuid": "34a1c6b6-0259-5f06-bf9c-b3b765b5e4b3", 00:17:38.134 "is_configured": true, 00:17:38.134 "data_offset": 2048, 00:17:38.134 "data_size": 63488 00:17:38.134 }, 00:17:38.134 { 00:17:38.134 "name": "BaseBdev2", 00:17:38.134 "uuid": "7f5a2b06-38f9-51ba-88e9-01451c574dda", 00:17:38.134 "is_configured": true, 00:17:38.134 "data_offset": 2048, 00:17:38.134 "data_size": 63488 00:17:38.134 } 00:17:38.134 ] 00:17:38.134 }' 00:17:38.134 12:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.134 12:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:38.134 12:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.393 12:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:38.393 12:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:39.328 12:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:39.328 12:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:39.328 12:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.328 12:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:39.328 12:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:39.328 12:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.328 12:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.328 12:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.328 12:16:35 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.328 12:16:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.328 12:16:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.328 12:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.328 "name": "raid_bdev1", 00:17:39.328 "uuid": "46e1d7c2-1e59-45c4-af85-17203dd3c5e6", 00:17:39.328 "strip_size_kb": 0, 00:17:39.328 "state": "online", 00:17:39.328 "raid_level": "raid1", 00:17:39.328 "superblock": true, 00:17:39.328 "num_base_bdevs": 2, 00:17:39.328 "num_base_bdevs_discovered": 2, 00:17:39.328 "num_base_bdevs_operational": 2, 00:17:39.328 "process": { 00:17:39.328 "type": "rebuild", 00:17:39.328 "target": "spare", 00:17:39.328 "progress": { 00:17:39.328 "blocks": 47104, 00:17:39.328 "percent": 74 00:17:39.328 } 00:17:39.328 }, 00:17:39.328 "base_bdevs_list": [ 00:17:39.328 { 00:17:39.328 "name": "spare", 00:17:39.328 "uuid": "34a1c6b6-0259-5f06-bf9c-b3b765b5e4b3", 00:17:39.328 "is_configured": true, 00:17:39.328 "data_offset": 2048, 00:17:39.328 "data_size": 63488 00:17:39.328 }, 00:17:39.328 { 00:17:39.328 "name": "BaseBdev2", 00:17:39.328 "uuid": "7f5a2b06-38f9-51ba-88e9-01451c574dda", 00:17:39.328 "is_configured": true, 00:17:39.328 "data_offset": 2048, 00:17:39.328 "data_size": 63488 00:17:39.328 } 00:17:39.328 ] 00:17:39.328 }' 00:17:39.329 12:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.329 12:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:39.329 12:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.589 12:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:39.589 12:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:40.156 [2024-11-25 
12:16:36.075770] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:40.156 [2024-11-25 12:16:36.075904] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:40.156 [2024-11-25 12:16:36.076110] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:40.414 12:16:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:40.414 12:16:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:40.414 12:16:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:40.414 12:16:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:40.414 12:16:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:40.414 12:16:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:40.414 12:16:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.415 12:16:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.415 12:16:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.415 12:16:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.415 12:16:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.415 12:16:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:40.415 "name": "raid_bdev1", 00:17:40.415 "uuid": "46e1d7c2-1e59-45c4-af85-17203dd3c5e6", 00:17:40.415 "strip_size_kb": 0, 00:17:40.415 "state": "online", 00:17:40.415 "raid_level": "raid1", 00:17:40.415 "superblock": true, 00:17:40.415 "num_base_bdevs": 2, 00:17:40.415 "num_base_bdevs_discovered": 2, 00:17:40.415 
"num_base_bdevs_operational": 2, 00:17:40.415 "base_bdevs_list": [ 00:17:40.415 { 00:17:40.415 "name": "spare", 00:17:40.415 "uuid": "34a1c6b6-0259-5f06-bf9c-b3b765b5e4b3", 00:17:40.415 "is_configured": true, 00:17:40.415 "data_offset": 2048, 00:17:40.415 "data_size": 63488 00:17:40.415 }, 00:17:40.415 { 00:17:40.415 "name": "BaseBdev2", 00:17:40.415 "uuid": "7f5a2b06-38f9-51ba-88e9-01451c574dda", 00:17:40.415 "is_configured": true, 00:17:40.415 "data_offset": 2048, 00:17:40.415 "data_size": 63488 00:17:40.415 } 00:17:40.415 ] 00:17:40.415 }' 00:17:40.415 12:16:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:40.673 12:16:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:40.673 12:16:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:40.673 12:16:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:40.673 12:16:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:40.673 12:16:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:40.673 12:16:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:40.673 12:16:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:40.673 12:16:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:40.673 12:16:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:40.674 12:16:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.674 12:16:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.674 12:16:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:40.674 12:16:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.674 12:16:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.674 12:16:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:40.674 "name": "raid_bdev1", 00:17:40.674 "uuid": "46e1d7c2-1e59-45c4-af85-17203dd3c5e6", 00:17:40.674 "strip_size_kb": 0, 00:17:40.674 "state": "online", 00:17:40.674 "raid_level": "raid1", 00:17:40.674 "superblock": true, 00:17:40.674 "num_base_bdevs": 2, 00:17:40.674 "num_base_bdevs_discovered": 2, 00:17:40.674 "num_base_bdevs_operational": 2, 00:17:40.674 "base_bdevs_list": [ 00:17:40.674 { 00:17:40.674 "name": "spare", 00:17:40.674 "uuid": "34a1c6b6-0259-5f06-bf9c-b3b765b5e4b3", 00:17:40.674 "is_configured": true, 00:17:40.674 "data_offset": 2048, 00:17:40.674 "data_size": 63488 00:17:40.674 }, 00:17:40.674 { 00:17:40.674 "name": "BaseBdev2", 00:17:40.674 "uuid": "7f5a2b06-38f9-51ba-88e9-01451c574dda", 00:17:40.674 "is_configured": true, 00:17:40.674 "data_offset": 2048, 00:17:40.674 "data_size": 63488 00:17:40.674 } 00:17:40.674 ] 00:17:40.674 }' 00:17:40.674 12:16:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:40.674 12:16:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:40.674 12:16:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:40.674 12:16:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:40.674 12:16:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:40.674 12:16:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:40.674 12:16:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:40.674 12:16:36 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:40.674 12:16:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:40.674 12:16:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:40.674 12:16:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.674 12:16:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.674 12:16:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.674 12:16:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.674 12:16:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.674 12:16:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.674 12:16:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.674 12:16:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.932 12:16:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.932 12:16:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.932 "name": "raid_bdev1", 00:17:40.932 "uuid": "46e1d7c2-1e59-45c4-af85-17203dd3c5e6", 00:17:40.932 "strip_size_kb": 0, 00:17:40.932 "state": "online", 00:17:40.932 "raid_level": "raid1", 00:17:40.932 "superblock": true, 00:17:40.933 "num_base_bdevs": 2, 00:17:40.933 "num_base_bdevs_discovered": 2, 00:17:40.933 "num_base_bdevs_operational": 2, 00:17:40.933 "base_bdevs_list": [ 00:17:40.933 { 00:17:40.933 "name": "spare", 00:17:40.933 "uuid": "34a1c6b6-0259-5f06-bf9c-b3b765b5e4b3", 00:17:40.933 "is_configured": true, 00:17:40.933 "data_offset": 2048, 00:17:40.933 "data_size": 63488 00:17:40.933 }, 00:17:40.933 { 
00:17:40.933 "name": "BaseBdev2", 00:17:40.933 "uuid": "7f5a2b06-38f9-51ba-88e9-01451c574dda", 00:17:40.933 "is_configured": true, 00:17:40.933 "data_offset": 2048, 00:17:40.933 "data_size": 63488 00:17:40.933 } 00:17:40.933 ] 00:17:40.933 }' 00:17:40.933 12:16:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.933 12:16:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.500 12:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:41.500 12:16:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.500 12:16:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.500 [2024-11-25 12:16:37.287602] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:41.500 [2024-11-25 12:16:37.287955] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:41.500 [2024-11-25 12:16:37.288234] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:41.500 [2024-11-25 12:16:37.288506] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:41.500 [2024-11-25 12:16:37.288540] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:41.500 12:16:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.500 12:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.500 12:16:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.500 12:16:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.500 12:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:41.500 12:16:37 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.500 12:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:41.500 12:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:41.500 12:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:41.500 12:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:41.500 12:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:41.500 12:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:41.500 12:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:41.500 12:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:41.500 12:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:41.500 12:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:41.500 12:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:41.500 12:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:41.500 12:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:41.758 /dev/nbd0 00:17:41.758 12:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:41.758 12:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:41.758 12:16:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:41.758 12:16:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 
00:17:41.758 12:16:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:41.758 12:16:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:41.758 12:16:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:41.758 12:16:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:41.758 12:16:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:41.758 12:16:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:41.758 12:16:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:41.758 1+0 records in 00:17:41.758 1+0 records out 00:17:41.758 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000545461 s, 7.5 MB/s 00:17:41.758 12:16:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:41.758 12:16:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:41.758 12:16:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:41.758 12:16:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:41.758 12:16:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:41.758 12:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:41.758 12:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:41.758 12:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:42.016 /dev/nbd1 00:17:42.016 12:16:38 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:42.016 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:42.016 12:16:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:42.016 12:16:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:42.016 12:16:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:42.016 12:16:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:42.016 12:16:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:42.016 12:16:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:42.016 12:16:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:42.016 12:16:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:42.016 12:16:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:42.016 1+0 records in 00:17:42.016 1+0 records out 00:17:42.016 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000508473 s, 8.1 MB/s 00:17:42.016 12:16:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:42.016 12:16:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:42.016 12:16:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:42.016 12:16:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:42.016 12:16:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:42.016 12:16:38 
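[Editor's note] The `waitfornbd` calls above poll `/proc/partitions` up to 20 times for the nbd device to appear, then confirm readability with a direct-I/O `dd` read. A minimal sketch of that readiness loop — `partitions_file` is a stand-in parameter so the sketch can run against a fake partition table instead of a live `/proc/partitions`:

```shell
#!/usr/bin/env bash
# Sketch of the waitfornbd pattern from autotest_common.sh@872-893 above.
# The real helper follows the grep loop with a one-block direct-I/O dd read
# (bs=4096 count=1 iflag=direct) to prove the device is actually readable;
# that probe is omitted here since we poll a plain file, not a block device.
waitfornbd() {
    local nbd_name=$1 partitions_file=${2:-/proc/partitions} i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" "$partitions_file"; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

# Demo against a fake partition table.
tmp=$(mktemp)
printf 'major minor  #blocks  name\n  43    0    65536 nbd0\n' > "$tmp"
if waitfornbd nbd0 "$tmp"; then
    echo "nbd0 ready"
fi
rm -f "$tmp"
```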
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:42.017 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:42.017 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:42.275 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:42.275 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:42.276 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:42.276 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:42.276 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:42.276 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:42.276 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:42.533 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:42.533 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:42.533 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:42.533 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:42.533 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:42.533 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:42.533 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:42.533 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:42.533 12:16:38 bdev_raid.raid_rebuild_test_sb -- 
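[Editor's note] The `cmp -i 1048576 /dev/nbd0 /dev/nbd1` at `bdev_raid.sh@738` above is the actual rebuild verification: it compares the base bdev against the rebuilt spare byte-for-byte, skipping the first 1048576 bytes of both (the superblock region: `data_offset` 2048 blocks × 512 B). A runnable sketch of the same check on two temp files standing in for the nbd devices:

```shell
# Two files with different "superblock" headers but identical payload past
# the 1 MiB offset — cmp -i SKIP ignores the first SKIP bytes of BOTH inputs.
a=$(mktemp); b=$(mktemp)
{ head -c 1048576 /dev/zero | tr '\0' 'A'; echo payload; } > "$a"
{ head -c 1048576 /dev/zero | tr '\0' 'B'; echo payload; } > "$b"
if cmp -i 1048576 "$a" "$b"; then
    echo "mirrors match past the superblock"
fi
rm -f "$a" "$b"
```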
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:42.533 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:42.792 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:42.792 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:42.792 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:42.792 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:42.792 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:42.792 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:42.792 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:42.792 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:42.792 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:42.792 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:42.792 12:16:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.792 12:16:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.792 12:16:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.792 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:42.792 12:16:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.792 12:16:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.792 [2024-11-25 12:16:38.823663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:17:42.792 [2024-11-25 12:16:38.823814] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.792 [2024-11-25 12:16:38.823874] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:42.792 [2024-11-25 12:16:38.823926] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.792 [2024-11-25 12:16:38.827196] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.792 [2024-11-25 12:16:38.827614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:42.792 [2024-11-25 12:16:38.827836] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:42.792 [2024-11-25 12:16:38.827952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:42.792 [2024-11-25 12:16:38.828236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:42.792 spare 00:17:42.792 12:16:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.792 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:42.792 12:16:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.792 12:16:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.051 [2024-11-25 12:16:38.928492] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:43.051 [2024-11-25 12:16:38.928610] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:43.051 [2024-11-25 12:16:38.929188] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:17:43.051 [2024-11-25 12:16:38.929562] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:43.051 [2024-11-25 12:16:38.929583] 
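[Editor's note] The geometry logged above ("blockcnt 63488, blocklen 512", and `data_offset: 2048` in the JSON dumps below) ties together: 2048 blocks of superblock metadata precede 63488 blocks of mirrored data per base bdev. Quick arithmetic (numbers taken from the log; nothing else assumed):

```shell
blocklen=512
data_offset=2048     # blocks reserved for the raid superblock
data_size=63488      # blockcnt reported for raid_bdev1
# 2048 * 512 = 1048576 — the exact byte offset cmp -i skips later in the test.
echo "superblock region: $(( data_offset * blocklen )) bytes"
echo "mirrored data:     $(( data_size * blocklen )) bytes"
```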
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:43.051 [2024-11-25 12:16:38.929897] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:43.051 12:16:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.051 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:43.051 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:43.051 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:43.051 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:43.051 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:43.051 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:43.051 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.051 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.051 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.051 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.051 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.051 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.051 12:16:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.051 12:16:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.051 12:16:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.051 
12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.051 "name": "raid_bdev1", 00:17:43.051 "uuid": "46e1d7c2-1e59-45c4-af85-17203dd3c5e6", 00:17:43.051 "strip_size_kb": 0, 00:17:43.051 "state": "online", 00:17:43.051 "raid_level": "raid1", 00:17:43.051 "superblock": true, 00:17:43.051 "num_base_bdevs": 2, 00:17:43.051 "num_base_bdevs_discovered": 2, 00:17:43.051 "num_base_bdevs_operational": 2, 00:17:43.051 "base_bdevs_list": [ 00:17:43.051 { 00:17:43.051 "name": "spare", 00:17:43.051 "uuid": "34a1c6b6-0259-5f06-bf9c-b3b765b5e4b3", 00:17:43.051 "is_configured": true, 00:17:43.051 "data_offset": 2048, 00:17:43.051 "data_size": 63488 00:17:43.051 }, 00:17:43.051 { 00:17:43.051 "name": "BaseBdev2", 00:17:43.051 "uuid": "7f5a2b06-38f9-51ba-88e9-01451c574dda", 00:17:43.051 "is_configured": true, 00:17:43.051 "data_offset": 2048, 00:17:43.051 "data_size": 63488 00:17:43.051 } 00:17:43.051 ] 00:17:43.051 }' 00:17:43.051 12:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.051 12:16:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.618 12:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:43.618 12:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.618 12:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:43.618 12:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:43.618 12:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.618 12:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.618 12:16:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.618 12:16:39 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:43.619 12:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.619 12:16:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.619 12:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.619 "name": "raid_bdev1", 00:17:43.619 "uuid": "46e1d7c2-1e59-45c4-af85-17203dd3c5e6", 00:17:43.619 "strip_size_kb": 0, 00:17:43.619 "state": "online", 00:17:43.619 "raid_level": "raid1", 00:17:43.619 "superblock": true, 00:17:43.619 "num_base_bdevs": 2, 00:17:43.619 "num_base_bdevs_discovered": 2, 00:17:43.619 "num_base_bdevs_operational": 2, 00:17:43.619 "base_bdevs_list": [ 00:17:43.619 { 00:17:43.619 "name": "spare", 00:17:43.619 "uuid": "34a1c6b6-0259-5f06-bf9c-b3b765b5e4b3", 00:17:43.619 "is_configured": true, 00:17:43.619 "data_offset": 2048, 00:17:43.619 "data_size": 63488 00:17:43.619 }, 00:17:43.619 { 00:17:43.619 "name": "BaseBdev2", 00:17:43.619 "uuid": "7f5a2b06-38f9-51ba-88e9-01451c574dda", 00:17:43.619 "is_configured": true, 00:17:43.619 "data_offset": 2048, 00:17:43.619 "data_size": 63488 00:17:43.619 } 00:17:43.619 ] 00:17:43.619 }' 00:17:43.619 12:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.619 12:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:43.619 12:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.619 12:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:43.619 12:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.619 12:16:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.619 12:16:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
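[Editor's note] `verify_raid_bdev_state` and `verify_raid_bdev_process` above filter the `bdev_raid_get_bdevs` output with `jq` (`bdev_raid.sh@113`, `@176`/`@177`) and compare fields like `state` and `process.type`. A dependency-free sketch of the same field checks run against a trimmed copy of the pretty-printed `raid_bdev_info` shown above, with `sed` standing in for `jq`:

```shell
# Subset of the raid_bdev_info JSON from the log (one key per line, as
# printed by the rpc); field() extracts a scalar value by key name.
raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2
}'
field() { sed -n "s/.*\"$1\": \"\{0,1\}\([^\",]*\)\"\{0,1\},\{0,1\}\$/\1/p" <<<"$raid_bdev_info"; }

if [[ $(field state) == online ]]; then echo "state ok"; fi
if [[ $(field num_base_bdevs_discovered) == 2 ]]; then echo "discovered ok"; fi
```

This only handles the flat scalar fields; the harness's `jq -r '.process.type // "none"'` additionally defaults a missing `process` object to `"none"`, which is how the `none`/`rebuild` checks in the log distinguish idle from rebuilding arrays.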
set +x 00:17:43.619 12:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:43.619 12:16:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.619 12:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:43.619 12:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:43.619 12:16:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.619 12:16:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.619 [2024-11-25 12:16:39.668519] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:43.619 12:16:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.619 12:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:43.619 12:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:43.619 12:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:43.619 12:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:43.619 12:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:43.619 12:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:43.619 12:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.619 12:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.619 12:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.619 12:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.619 12:16:39 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.619 12:16:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.619 12:16:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.619 12:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.619 12:16:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.878 12:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.878 "name": "raid_bdev1", 00:17:43.878 "uuid": "46e1d7c2-1e59-45c4-af85-17203dd3c5e6", 00:17:43.878 "strip_size_kb": 0, 00:17:43.878 "state": "online", 00:17:43.878 "raid_level": "raid1", 00:17:43.878 "superblock": true, 00:17:43.878 "num_base_bdevs": 2, 00:17:43.878 "num_base_bdevs_discovered": 1, 00:17:43.878 "num_base_bdevs_operational": 1, 00:17:43.878 "base_bdevs_list": [ 00:17:43.878 { 00:17:43.878 "name": null, 00:17:43.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.878 "is_configured": false, 00:17:43.878 "data_offset": 0, 00:17:43.878 "data_size": 63488 00:17:43.878 }, 00:17:43.878 { 00:17:43.878 "name": "BaseBdev2", 00:17:43.878 "uuid": "7f5a2b06-38f9-51ba-88e9-01451c574dda", 00:17:43.878 "is_configured": true, 00:17:43.878 "data_offset": 2048, 00:17:43.878 "data_size": 63488 00:17:43.878 } 00:17:43.878 ] 00:17:43.878 }' 00:17:43.878 12:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.878 12:16:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.137 12:16:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:44.137 12:16:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.137 12:16:40 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:44.137 [2024-11-25 12:16:40.212871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:44.137 [2024-11-25 12:16:40.213242] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:44.137 [2024-11-25 12:16:40.213275] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:44.137 [2024-11-25 12:16:40.213388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:44.395 [2024-11-25 12:16:40.230127] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:17:44.395 12:16:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.395 12:16:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:44.395 [2024-11-25 12:16:40.232825] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:45.329 12:16:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:45.330 12:16:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:45.330 12:16:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:45.330 12:16:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:45.330 12:16:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:45.330 12:16:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.330 12:16:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.330 12:16:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.330 12:16:41 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.330 12:16:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.330 12:16:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:45.330 "name": "raid_bdev1", 00:17:45.330 "uuid": "46e1d7c2-1e59-45c4-af85-17203dd3c5e6", 00:17:45.330 "strip_size_kb": 0, 00:17:45.330 "state": "online", 00:17:45.330 "raid_level": "raid1", 00:17:45.330 "superblock": true, 00:17:45.330 "num_base_bdevs": 2, 00:17:45.330 "num_base_bdevs_discovered": 2, 00:17:45.330 "num_base_bdevs_operational": 2, 00:17:45.330 "process": { 00:17:45.330 "type": "rebuild", 00:17:45.330 "target": "spare", 00:17:45.330 "progress": { 00:17:45.330 "blocks": 18432, 00:17:45.330 "percent": 29 00:17:45.330 } 00:17:45.330 }, 00:17:45.330 "base_bdevs_list": [ 00:17:45.330 { 00:17:45.330 "name": "spare", 00:17:45.330 "uuid": "34a1c6b6-0259-5f06-bf9c-b3b765b5e4b3", 00:17:45.330 "is_configured": true, 00:17:45.330 "data_offset": 2048, 00:17:45.330 "data_size": 63488 00:17:45.330 }, 00:17:45.330 { 00:17:45.330 "name": "BaseBdev2", 00:17:45.330 "uuid": "7f5a2b06-38f9-51ba-88e9-01451c574dda", 00:17:45.330 "is_configured": true, 00:17:45.330 "data_offset": 2048, 00:17:45.330 "data_size": 63488 00:17:45.330 } 00:17:45.330 ] 00:17:45.330 }' 00:17:45.330 12:16:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:45.330 12:16:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:45.330 12:16:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:45.330 12:16:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:45.330 12:16:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:45.330 12:16:41 bdev_raid.raid_rebuild_test_sb -- 
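[Editor's note] The test sleeps a fixed second (`bdev_raid.sh@757`/`@764 -- # sleep 1`) before asserting `process.type == rebuild`. A sketch of the more general poll-until-state pattern; `check_fn` is a hypothetical probe standing in for the `rpc.py ... | jq` pipeline, here simulated with a counter file so it flips to "rebuild" on the third poll:

```shell
wait_for_state() {
    local want=$1 tries=${2:-20} i
    for ((i = 0; i < tries; i++)); do
        if [[ $(check_fn) == "$want" ]]; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

# Simulated probe: reports "none" twice, then "rebuild". A file carries the
# call count because $(check_fn) runs in a subshell, where plain variable
# increments would not persist.
cnt=$(mktemp); echo 0 > "$cnt"
check_fn() {
    local c
    c=$(<"$cnt")
    echo $((c + 1)) > "$cnt"
    if (( c + 1 >= 3 )); then echo rebuild; else echo none; fi
}

if wait_for_state rebuild; then
    echo "rebuild observed"
fi
rm -f "$cnt"
```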
common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.330 12:16:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.330 [2024-11-25 12:16:41.391219] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:45.588 [2024-11-25 12:16:41.445412] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:45.588 [2024-11-25 12:16:41.445502] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:45.588 [2024-11-25 12:16:41.445530] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:45.588 [2024-11-25 12:16:41.445549] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:45.588 12:16:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.588 12:16:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:45.588 12:16:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:45.588 12:16:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:45.588 12:16:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:45.588 12:16:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:45.588 12:16:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:45.588 12:16:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.588 12:16:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.588 12:16:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.588 12:16:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.588 
12:16:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.588 12:16:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.588 12:16:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.588 12:16:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.588 12:16:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.588 12:16:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.588 "name": "raid_bdev1", 00:17:45.588 "uuid": "46e1d7c2-1e59-45c4-af85-17203dd3c5e6", 00:17:45.588 "strip_size_kb": 0, 00:17:45.588 "state": "online", 00:17:45.588 "raid_level": "raid1", 00:17:45.588 "superblock": true, 00:17:45.588 "num_base_bdevs": 2, 00:17:45.588 "num_base_bdevs_discovered": 1, 00:17:45.588 "num_base_bdevs_operational": 1, 00:17:45.588 "base_bdevs_list": [ 00:17:45.588 { 00:17:45.588 "name": null, 00:17:45.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.588 "is_configured": false, 00:17:45.588 "data_offset": 0, 00:17:45.588 "data_size": 63488 00:17:45.588 }, 00:17:45.588 { 00:17:45.588 "name": "BaseBdev2", 00:17:45.588 "uuid": "7f5a2b06-38f9-51ba-88e9-01451c574dda", 00:17:45.588 "is_configured": true, 00:17:45.588 "data_offset": 2048, 00:17:45.588 "data_size": 63488 00:17:45.588 } 00:17:45.588 ] 00:17:45.588 }' 00:17:45.588 12:16:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.588 12:16:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.154 12:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:46.154 12:16:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.154 12:16:42 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:46.154 [2024-11-25 12:16:42.050922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:46.154 [2024-11-25 12:16:42.051095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.154 [2024-11-25 12:16:42.051150] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:46.154 [2024-11-25 12:16:42.051179] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.154 [2024-11-25 12:16:42.051975] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.154 [2024-11-25 12:16:42.052034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:46.154 [2024-11-25 12:16:42.052186] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:46.154 [2024-11-25 12:16:42.052218] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:46.154 [2024-11-25 12:16:42.052235] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
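[Editor's note] The "Re-adding bdev spare" notice above is driven by the superblock sequence-number comparison logged just before it: the examined bdev carries `seq_number` 4, older than the raid bdev's 5, so it is re-added and rebuilt instead of being trusted as current. The decision itself happens in C (`raid_bdev_examine_sb`); this is just the logged comparison restated, with the two sequence numbers taken from the log:

```shell
sb_seq=4     # seq_number found in the spare's superblock
raid_seq=5   # seq_number of the existing raid bdev raid_bdev1
if (( sb_seq < raid_seq )); then
    echo "re-add and rebuild spare"
else
    echo "superblock current, configure directly"
fi
```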
00:17:46.154 [2024-11-25 12:16:42.052283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:46.154 [2024-11-25 12:16:42.069964] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:17:46.154 spare 00:17:46.154 12:16:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.154 12:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:46.154 [2024-11-25 12:16:42.072856] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:47.090 12:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:47.090 12:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:47.090 12:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:47.090 12:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:47.090 12:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:47.090 12:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.090 12:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.090 12:16:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.090 12:16:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.090 12:16:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.090 12:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:47.090 "name": "raid_bdev1", 00:17:47.090 "uuid": "46e1d7c2-1e59-45c4-af85-17203dd3c5e6", 00:17:47.090 "strip_size_kb": 0, 00:17:47.090 "state": "online", 00:17:47.090 
"raid_level": "raid1", 00:17:47.090 "superblock": true, 00:17:47.090 "num_base_bdevs": 2, 00:17:47.090 "num_base_bdevs_discovered": 2, 00:17:47.090 "num_base_bdevs_operational": 2, 00:17:47.090 "process": { 00:17:47.090 "type": "rebuild", 00:17:47.090 "target": "spare", 00:17:47.090 "progress": { 00:17:47.090 "blocks": 18432, 00:17:47.090 "percent": 29 00:17:47.090 } 00:17:47.090 }, 00:17:47.090 "base_bdevs_list": [ 00:17:47.090 { 00:17:47.090 "name": "spare", 00:17:47.090 "uuid": "34a1c6b6-0259-5f06-bf9c-b3b765b5e4b3", 00:17:47.090 "is_configured": true, 00:17:47.090 "data_offset": 2048, 00:17:47.090 "data_size": 63488 00:17:47.090 }, 00:17:47.090 { 00:17:47.090 "name": "BaseBdev2", 00:17:47.090 "uuid": "7f5a2b06-38f9-51ba-88e9-01451c574dda", 00:17:47.090 "is_configured": true, 00:17:47.090 "data_offset": 2048, 00:17:47.090 "data_size": 63488 00:17:47.090 } 00:17:47.090 ] 00:17:47.090 }' 00:17:47.090 12:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:47.348 12:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:47.348 12:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:47.348 12:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:47.348 12:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:47.348 12:16:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.348 12:16:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.348 [2024-11-25 12:16:43.246843] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:47.348 [2024-11-25 12:16:43.285414] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:47.348 [2024-11-25 12:16:43.285583] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.348 [2024-11-25 12:16:43.285625] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:47.348 [2024-11-25 12:16:43.285642] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:47.348 12:16:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.348 12:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:47.348 12:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:47.348 12:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:47.348 12:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:47.348 12:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:47.348 12:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:47.348 12:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.348 12:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.348 12:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.348 12:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.348 12:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.348 12:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.348 12:16:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.348 12:16:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.348 12:16:43 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.348 12:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.348 "name": "raid_bdev1", 00:17:47.348 "uuid": "46e1d7c2-1e59-45c4-af85-17203dd3c5e6", 00:17:47.348 "strip_size_kb": 0, 00:17:47.348 "state": "online", 00:17:47.348 "raid_level": "raid1", 00:17:47.348 "superblock": true, 00:17:47.348 "num_base_bdevs": 2, 00:17:47.348 "num_base_bdevs_discovered": 1, 00:17:47.348 "num_base_bdevs_operational": 1, 00:17:47.348 "base_bdevs_list": [ 00:17:47.348 { 00:17:47.348 "name": null, 00:17:47.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.348 "is_configured": false, 00:17:47.348 "data_offset": 0, 00:17:47.348 "data_size": 63488 00:17:47.348 }, 00:17:47.348 { 00:17:47.348 "name": "BaseBdev2", 00:17:47.348 "uuid": "7f5a2b06-38f9-51ba-88e9-01451c574dda", 00:17:47.348 "is_configured": true, 00:17:47.348 "data_offset": 2048, 00:17:47.348 "data_size": 63488 00:17:47.348 } 00:17:47.348 ] 00:17:47.348 }' 00:17:47.348 12:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.349 12:16:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.916 12:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:47.916 12:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:47.916 12:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:47.916 12:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:47.916 12:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:47.916 12:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.916 12:16:43 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.916 12:16:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.916 12:16:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.916 12:16:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.916 12:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:47.916 "name": "raid_bdev1", 00:17:47.916 "uuid": "46e1d7c2-1e59-45c4-af85-17203dd3c5e6", 00:17:47.916 "strip_size_kb": 0, 00:17:47.916 "state": "online", 00:17:47.916 "raid_level": "raid1", 00:17:47.916 "superblock": true, 00:17:47.916 "num_base_bdevs": 2, 00:17:47.916 "num_base_bdevs_discovered": 1, 00:17:47.916 "num_base_bdevs_operational": 1, 00:17:47.916 "base_bdevs_list": [ 00:17:47.916 { 00:17:47.916 "name": null, 00:17:47.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.916 "is_configured": false, 00:17:47.916 "data_offset": 0, 00:17:47.916 "data_size": 63488 00:17:47.916 }, 00:17:47.916 { 00:17:47.916 "name": "BaseBdev2", 00:17:47.916 "uuid": "7f5a2b06-38f9-51ba-88e9-01451c574dda", 00:17:47.916 "is_configured": true, 00:17:47.916 "data_offset": 2048, 00:17:47.916 "data_size": 63488 00:17:47.916 } 00:17:47.916 ] 00:17:47.916 }' 00:17:47.916 12:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:47.916 12:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:47.916 12:16:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:48.230 12:16:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:48.230 12:16:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:48.230 12:16:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:48.230 12:16:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.230 12:16:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.230 12:16:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:48.230 12:16:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.230 12:16:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.230 [2024-11-25 12:16:44.028331] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:48.230 [2024-11-25 12:16:44.028451] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.230 [2024-11-25 12:16:44.028510] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:48.230 [2024-11-25 12:16:44.028542] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.230 [2024-11-25 12:16:44.029183] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.230 [2024-11-25 12:16:44.029230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:48.230 [2024-11-25 12:16:44.029376] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:48.230 [2024-11-25 12:16:44.029403] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:48.230 [2024-11-25 12:16:44.029421] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:48.230 [2024-11-25 12:16:44.029440] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:48.230 BaseBdev1 00:17:48.230 12:16:44 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.230 12:16:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:49.196 12:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:49.196 12:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:49.196 12:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:49.196 12:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:49.196 12:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:49.196 12:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:49.196 12:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.196 12:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.196 12:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.196 12:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.196 12:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.196 12:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.196 12:16:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.196 12:16:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.196 12:16:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.196 12:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.196 "name": "raid_bdev1", 00:17:49.196 "uuid": "46e1d7c2-1e59-45c4-af85-17203dd3c5e6", 00:17:49.196 
"strip_size_kb": 0, 00:17:49.196 "state": "online", 00:17:49.196 "raid_level": "raid1", 00:17:49.196 "superblock": true, 00:17:49.196 "num_base_bdevs": 2, 00:17:49.196 "num_base_bdevs_discovered": 1, 00:17:49.196 "num_base_bdevs_operational": 1, 00:17:49.196 "base_bdevs_list": [ 00:17:49.196 { 00:17:49.196 "name": null, 00:17:49.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.196 "is_configured": false, 00:17:49.196 "data_offset": 0, 00:17:49.196 "data_size": 63488 00:17:49.196 }, 00:17:49.196 { 00:17:49.196 "name": "BaseBdev2", 00:17:49.196 "uuid": "7f5a2b06-38f9-51ba-88e9-01451c574dda", 00:17:49.196 "is_configured": true, 00:17:49.196 "data_offset": 2048, 00:17:49.196 "data_size": 63488 00:17:49.196 } 00:17:49.196 ] 00:17:49.196 }' 00:17:49.196 12:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.196 12:16:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.455 12:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:49.455 12:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:49.455 12:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:49.455 12:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:49.455 12:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:49.455 12:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.455 12:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.455 12:16:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.455 12:16:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.714 12:16:45 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.714 12:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:49.714 "name": "raid_bdev1", 00:17:49.714 "uuid": "46e1d7c2-1e59-45c4-af85-17203dd3c5e6", 00:17:49.714 "strip_size_kb": 0, 00:17:49.714 "state": "online", 00:17:49.714 "raid_level": "raid1", 00:17:49.714 "superblock": true, 00:17:49.714 "num_base_bdevs": 2, 00:17:49.714 "num_base_bdevs_discovered": 1, 00:17:49.714 "num_base_bdevs_operational": 1, 00:17:49.714 "base_bdevs_list": [ 00:17:49.714 { 00:17:49.714 "name": null, 00:17:49.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.714 "is_configured": false, 00:17:49.714 "data_offset": 0, 00:17:49.714 "data_size": 63488 00:17:49.714 }, 00:17:49.714 { 00:17:49.714 "name": "BaseBdev2", 00:17:49.714 "uuid": "7f5a2b06-38f9-51ba-88e9-01451c574dda", 00:17:49.714 "is_configured": true, 00:17:49.714 "data_offset": 2048, 00:17:49.714 "data_size": 63488 00:17:49.714 } 00:17:49.714 ] 00:17:49.714 }' 00:17:49.714 12:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:49.714 12:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:49.714 12:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:49.714 12:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:49.714 12:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:49.714 12:16:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:17:49.714 12:16:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:49.714 12:16:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local 
arg=rpc_cmd 00:17:49.714 12:16:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:49.714 12:16:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:49.714 12:16:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:49.714 12:16:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:49.714 12:16:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.714 12:16:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.714 [2024-11-25 12:16:45.713023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:49.714 [2024-11-25 12:16:45.713243] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:49.714 [2024-11-25 12:16:45.713271] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:49.714 request: 00:17:49.714 { 00:17:49.714 "base_bdev": "BaseBdev1", 00:17:49.714 "raid_bdev": "raid_bdev1", 00:17:49.714 "method": "bdev_raid_add_base_bdev", 00:17:49.714 "req_id": 1 00:17:49.714 } 00:17:49.714 Got JSON-RPC error response 00:17:49.714 response: 00:17:49.714 { 00:17:49.714 "code": -22, 00:17:49.714 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:49.714 } 00:17:49.714 12:16:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:49.714 12:16:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:17:49.714 12:16:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:49.714 12:16:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:49.714 12:16:45 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:49.715 12:16:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:50.649 12:16:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:50.649 12:16:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.649 12:16:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.649 12:16:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:50.649 12:16:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:50.649 12:16:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:50.649 12:16:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.649 12:16:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.649 12:16:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.649 12:16:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.649 12:16:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.649 12:16:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.649 12:16:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.649 12:16:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.908 12:16:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.908 12:16:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.908 "name": "raid_bdev1", 00:17:50.908 "uuid": 
"46e1d7c2-1e59-45c4-af85-17203dd3c5e6", 00:17:50.908 "strip_size_kb": 0, 00:17:50.908 "state": "online", 00:17:50.908 "raid_level": "raid1", 00:17:50.908 "superblock": true, 00:17:50.908 "num_base_bdevs": 2, 00:17:50.908 "num_base_bdevs_discovered": 1, 00:17:50.908 "num_base_bdevs_operational": 1, 00:17:50.908 "base_bdevs_list": [ 00:17:50.908 { 00:17:50.908 "name": null, 00:17:50.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.908 "is_configured": false, 00:17:50.908 "data_offset": 0, 00:17:50.908 "data_size": 63488 00:17:50.908 }, 00:17:50.908 { 00:17:50.908 "name": "BaseBdev2", 00:17:50.908 "uuid": "7f5a2b06-38f9-51ba-88e9-01451c574dda", 00:17:50.908 "is_configured": true, 00:17:50.908 "data_offset": 2048, 00:17:50.908 "data_size": 63488 00:17:50.908 } 00:17:50.908 ] 00:17:50.908 }' 00:17:50.908 12:16:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.908 12:16:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.166 12:16:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:51.166 12:16:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:51.166 12:16:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:51.166 12:16:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:51.166 12:16:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:51.166 12:16:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.166 12:16:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.166 12:16:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.166 12:16:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:17:51.166 12:16:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.425 12:16:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:51.425 "name": "raid_bdev1", 00:17:51.425 "uuid": "46e1d7c2-1e59-45c4-af85-17203dd3c5e6", 00:17:51.425 "strip_size_kb": 0, 00:17:51.425 "state": "online", 00:17:51.425 "raid_level": "raid1", 00:17:51.425 "superblock": true, 00:17:51.425 "num_base_bdevs": 2, 00:17:51.425 "num_base_bdevs_discovered": 1, 00:17:51.425 "num_base_bdevs_operational": 1, 00:17:51.425 "base_bdevs_list": [ 00:17:51.425 { 00:17:51.425 "name": null, 00:17:51.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.425 "is_configured": false, 00:17:51.425 "data_offset": 0, 00:17:51.425 "data_size": 63488 00:17:51.425 }, 00:17:51.425 { 00:17:51.425 "name": "BaseBdev2", 00:17:51.425 "uuid": "7f5a2b06-38f9-51ba-88e9-01451c574dda", 00:17:51.425 "is_configured": true, 00:17:51.425 "data_offset": 2048, 00:17:51.425 "data_size": 63488 00:17:51.425 } 00:17:51.425 ] 00:17:51.425 }' 00:17:51.425 12:16:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:51.425 12:16:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:51.425 12:16:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:51.425 12:16:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:51.425 12:16:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75918 00:17:51.425 12:16:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75918 ']' 00:17:51.425 12:16:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75918 00:17:51.425 12:16:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:51.425 12:16:47 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:51.425 12:16:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75918 00:17:51.425 12:16:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:51.425 12:16:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:51.425 killing process with pid 75918 00:17:51.425 12:16:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75918' 00:17:51.425 12:16:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75918 00:17:51.425 Received shutdown signal, test time was about 60.000000 seconds 00:17:51.425 00:17:51.425 Latency(us) 00:17:51.425 [2024-11-25T12:16:47.516Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.425 [2024-11-25T12:16:47.516Z] =================================================================================================================== 00:17:51.425 [2024-11-25T12:16:47.516Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:51.425 [2024-11-25 12:16:47.393430] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:51.425 12:16:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75918 00:17:51.425 [2024-11-25 12:16:47.393590] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:51.425 [2024-11-25 12:16:47.393672] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:51.425 [2024-11-25 12:16:47.393695] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:51.685 [2024-11-25 12:16:47.660819] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:52.620 12:16:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 
00:17:52.620 00:17:52.620 real 0m27.059s 00:17:52.620 user 0m33.420s 00:17:52.620 sys 0m4.034s 00:17:52.620 12:16:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:52.620 ************************************ 00:17:52.620 END TEST raid_rebuild_test_sb 00:17:52.620 ************************************ 00:17:52.620 12:16:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.885 12:16:48 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:17:52.885 12:16:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:52.885 12:16:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:52.885 12:16:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:52.885 ************************************ 00:17:52.886 START TEST raid_rebuild_test_io 00:17:52.886 ************************************ 00:17:52.886 12:16:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:17:52.886 12:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:52.886 12:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:52.886 12:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:52.886 12:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:17:52.886 12:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:52.886 12:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:52.886 12:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:52.886 12:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:52.886 12:16:48 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:52.886 12:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:52.886 12:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:52.886 12:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:52.886 12:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:52.886 12:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:52.886 12:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:52.886 12:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:52.886 12:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:52.886 12:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:52.886 12:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:52.886 12:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:52.886 12:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:52.886 12:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:52.886 12:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:52.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:52.886 12:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76689 00:17:52.886 12:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76689 00:17:52.886 12:16:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76689 ']' 00:17:52.886 12:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:52.886 12:16:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.886 12:16:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:52.886 12:16:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.886 12:16:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:52.886 12:16:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:52.886 [2024-11-25 12:16:48.860551] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:17:52.886 [2024-11-25 12:16:48.860957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76689 ] 00:17:52.886 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:52.886 Zero copy mechanism will not be used. 
00:17:53.144 [2024-11-25 12:16:49.048800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.144 [2024-11-25 12:16:49.182370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.402 [2024-11-25 12:16:49.390988] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:53.402 [2024-11-25 12:16:49.391033] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:53.967 12:16:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:53.967 12:16:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:17:53.967 12:16:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:53.967 12:16:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:53.967 12:16:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.967 12:16:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:53.967 BaseBdev1_malloc 00:17:53.967 12:16:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.967 12:16:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:53.967 12:16:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.967 12:16:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:53.967 [2024-11-25 12:16:49.911496] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:53.967 [2024-11-25 12:16:49.911578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.967 [2024-11-25 12:16:49.911618] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:53.967 [2024-11-25 
12:16:49.911641] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.967 [2024-11-25 12:16:49.914540] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.967 [2024-11-25 12:16:49.914591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:53.967 BaseBdev1 00:17:53.967 12:16:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.967 12:16:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:53.967 12:16:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:53.967 12:16:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.967 12:16:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:53.967 BaseBdev2_malloc 00:17:53.967 12:16:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.967 12:16:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:53.967 12:16:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.967 12:16:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:53.967 [2024-11-25 12:16:49.964817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:53.967 [2024-11-25 12:16:49.964907] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.967 [2024-11-25 12:16:49.964935] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:53.967 [2024-11-25 12:16:49.964955] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.967 [2024-11-25 12:16:49.967685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:17:53.967 [2024-11-25 12:16:49.967749] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:53.967 BaseBdev2 00:17:53.967 12:16:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.967 12:16:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:53.967 12:16:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.967 12:16:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:53.967 spare_malloc 00:17:53.967 12:16:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.967 12:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:53.967 12:16:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.967 12:16:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:53.967 spare_delay 00:17:53.967 12:16:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.967 12:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:53.967 12:16:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.967 12:16:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:53.967 [2024-11-25 12:16:50.036848] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:53.967 [2024-11-25 12:16:50.036958] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.967 [2024-11-25 12:16:50.036987] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:53.967 [2024-11-25 12:16:50.037014] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.968 [2024-11-25 12:16:50.039807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.968 [2024-11-25 12:16:50.039855] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:53.968 spare 00:17:53.968 12:16:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.968 12:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:53.968 12:16:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.968 12:16:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:53.968 [2024-11-25 12:16:50.044937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:53.968 [2024-11-25 12:16:50.047426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:53.968 [2024-11-25 12:16:50.047548] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:53.968 [2024-11-25 12:16:50.047570] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:53.968 [2024-11-25 12:16:50.047897] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:53.968 [2024-11-25 12:16:50.048116] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:53.968 [2024-11-25 12:16:50.048134] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:53.968 [2024-11-25 12:16:50.048325] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.968 12:16:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.968 12:16:50 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:53.968 12:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:53.968 12:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:53.968 12:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:53.968 12:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:53.968 12:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:53.968 12:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.968 12:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.968 12:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.968 12:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.968 12:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.968 12:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.968 12:16:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.968 12:16:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:54.226 12:16:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.226 12:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.226 "name": "raid_bdev1", 00:17:54.226 "uuid": "f710b1b6-cb0b-4e2e-bc92-bb7fa523dd6a", 00:17:54.226 "strip_size_kb": 0, 00:17:54.226 "state": "online", 00:17:54.226 "raid_level": "raid1", 00:17:54.226 "superblock": false, 00:17:54.226 "num_base_bdevs": 2, 00:17:54.226 
"num_base_bdevs_discovered": 2, 00:17:54.226 "num_base_bdevs_operational": 2, 00:17:54.226 "base_bdevs_list": [ 00:17:54.226 { 00:17:54.226 "name": "BaseBdev1", 00:17:54.226 "uuid": "e5ff6fc5-4ac7-529c-8ab0-f064f23a7105", 00:17:54.226 "is_configured": true, 00:17:54.226 "data_offset": 0, 00:17:54.226 "data_size": 65536 00:17:54.226 }, 00:17:54.226 { 00:17:54.226 "name": "BaseBdev2", 00:17:54.226 "uuid": "7d242ec7-2432-535c-9b04-bacac89e670f", 00:17:54.226 "is_configured": true, 00:17:54.226 "data_offset": 0, 00:17:54.226 "data_size": 65536 00:17:54.226 } 00:17:54.226 ] 00:17:54.226 }' 00:17:54.226 12:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.226 12:16:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:54.488 12:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:54.488 12:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:54.488 12:16:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.488 12:16:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:54.488 [2024-11-25 12:16:50.565980] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:54.746 12:16:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.746 12:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:17:54.746 12:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:54.746 12:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.746 12:16:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.746 12:16:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:17:54.746 12:16:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.746 12:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:54.746 12:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:17:54.746 12:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:54.746 12:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:54.746 12:16:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.746 12:16:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:54.746 [2024-11-25 12:16:50.669264] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:54.746 12:16:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.746 12:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:54.746 12:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:54.746 12:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:54.746 12:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:54.746 12:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:54.746 12:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:54.746 12:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.746 12:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.746 12:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:54.746 12:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.746 12:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.746 12:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.746 12:16:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.746 12:16:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:54.746 12:16:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.747 12:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.747 "name": "raid_bdev1", 00:17:54.747 "uuid": "f710b1b6-cb0b-4e2e-bc92-bb7fa523dd6a", 00:17:54.747 "strip_size_kb": 0, 00:17:54.747 "state": "online", 00:17:54.747 "raid_level": "raid1", 00:17:54.747 "superblock": false, 00:17:54.747 "num_base_bdevs": 2, 00:17:54.747 "num_base_bdevs_discovered": 1, 00:17:54.747 "num_base_bdevs_operational": 1, 00:17:54.747 "base_bdevs_list": [ 00:17:54.747 { 00:17:54.747 "name": null, 00:17:54.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.747 "is_configured": false, 00:17:54.747 "data_offset": 0, 00:17:54.747 "data_size": 65536 00:17:54.747 }, 00:17:54.747 { 00:17:54.747 "name": "BaseBdev2", 00:17:54.747 "uuid": "7d242ec7-2432-535c-9b04-bacac89e670f", 00:17:54.747 "is_configured": true, 00:17:54.747 "data_offset": 0, 00:17:54.747 "data_size": 65536 00:17:54.747 } 00:17:54.747 ] 00:17:54.747 }' 00:17:54.747 12:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.747 12:16:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:54.747 [2024-11-25 12:16:50.804184] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:54.747 I/O size of 3145728 is greater 
than zero copy threshold (65536). 00:17:54.747 Zero copy mechanism will not be used. 00:17:54.747 Running I/O for 60 seconds... 00:17:55.313 12:16:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:55.313 12:16:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.313 12:16:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:55.313 [2024-11-25 12:16:51.187635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:55.313 12:16:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.313 12:16:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:55.313 [2024-11-25 12:16:51.250681] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:55.313 [2024-11-25 12:16:51.253942] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:55.313 [2024-11-25 12:16:51.383664] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:55.313 [2024-11-25 12:16:51.384799] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:55.571 [2024-11-25 12:16:51.595660] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:55.571 [2024-11-25 12:16:51.596167] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:55.830 162.00 IOPS, 486.00 MiB/s [2024-11-25T12:16:51.921Z] [2024-11-25 12:16:51.860820] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:55.830 [2024-11-25 12:16:51.861749] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:56.088 [2024-11-25 12:16:52.008550] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:56.088 [2024-11-25 12:16:52.009457] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:56.346 12:16:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:56.346 12:16:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:56.346 12:16:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:56.346 12:16:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:56.346 12:16:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:56.346 12:16:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.346 12:16:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.346 12:16:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.346 12:16:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:56.346 12:16:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.346 12:16:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:56.346 "name": "raid_bdev1", 00:17:56.346 "uuid": "f710b1b6-cb0b-4e2e-bc92-bb7fa523dd6a", 00:17:56.346 "strip_size_kb": 0, 00:17:56.346 "state": "online", 00:17:56.346 "raid_level": "raid1", 00:17:56.346 "superblock": false, 00:17:56.346 "num_base_bdevs": 2, 00:17:56.346 "num_base_bdevs_discovered": 2, 00:17:56.346 "num_base_bdevs_operational": 2, 00:17:56.346 "process": { 00:17:56.346 
"type": "rebuild", 00:17:56.346 "target": "spare", 00:17:56.346 "progress": { 00:17:56.346 "blocks": 12288, 00:17:56.346 "percent": 18 00:17:56.346 } 00:17:56.346 }, 00:17:56.346 "base_bdevs_list": [ 00:17:56.346 { 00:17:56.346 "name": "spare", 00:17:56.346 "uuid": "8f3c9a8f-2d71-504d-8c63-4790d706907c", 00:17:56.346 "is_configured": true, 00:17:56.346 "data_offset": 0, 00:17:56.346 "data_size": 65536 00:17:56.346 }, 00:17:56.346 { 00:17:56.346 "name": "BaseBdev2", 00:17:56.346 "uuid": "7d242ec7-2432-535c-9b04-bacac89e670f", 00:17:56.346 "is_configured": true, 00:17:56.346 "data_offset": 0, 00:17:56.346 "data_size": 65536 00:17:56.346 } 00:17:56.346 ] 00:17:56.346 }' 00:17:56.346 12:16:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.346 12:16:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:56.346 12:16:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:56.346 [2024-11-25 12:16:52.344589] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:56.346 12:16:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:56.346 12:16:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:56.346 12:16:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.346 12:16:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:56.346 [2024-11-25 12:16:52.395697] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:56.605 [2024-11-25 12:16:52.457583] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:17:56.605 [2024-11-25 12:16:52.458096] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:17:56.605 [2024-11-25 12:16:52.576691] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:56.605 [2024-11-25 12:16:52.587176] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:56.605 [2024-11-25 12:16:52.587568] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:56.605 [2024-11-25 12:16:52.587599] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:56.605 [2024-11-25 12:16:52.643497] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:17:56.605 12:16:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.605 12:16:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:56.605 12:16:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:56.605 12:16:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.605 12:16:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:56.605 12:16:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:56.605 12:16:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:56.605 12:16:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.605 12:16:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.605 12:16:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.605 12:16:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.605 12:16:52 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.605 12:16:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.605 12:16:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.605 12:16:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:56.900 12:16:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.900 12:16:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.900 "name": "raid_bdev1", 00:17:56.900 "uuid": "f710b1b6-cb0b-4e2e-bc92-bb7fa523dd6a", 00:17:56.900 "strip_size_kb": 0, 00:17:56.900 "state": "online", 00:17:56.900 "raid_level": "raid1", 00:17:56.900 "superblock": false, 00:17:56.900 "num_base_bdevs": 2, 00:17:56.900 "num_base_bdevs_discovered": 1, 00:17:56.900 "num_base_bdevs_operational": 1, 00:17:56.900 "base_bdevs_list": [ 00:17:56.900 { 00:17:56.900 "name": null, 00:17:56.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.900 "is_configured": false, 00:17:56.900 "data_offset": 0, 00:17:56.900 "data_size": 65536 00:17:56.900 }, 00:17:56.900 { 00:17:56.900 "name": "BaseBdev2", 00:17:56.900 "uuid": "7d242ec7-2432-535c-9b04-bacac89e670f", 00:17:56.900 "is_configured": true, 00:17:56.900 "data_offset": 0, 00:17:56.900 "data_size": 65536 00:17:56.900 } 00:17:56.900 ] 00:17:56.900 }' 00:17:56.900 12:16:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.900 12:16:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:57.159 125.50 IOPS, 376.50 MiB/s [2024-11-25T12:16:53.250Z] 12:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:57.159 12:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:57.159 12:16:53 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:57.160 12:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:57.160 12:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:57.160 12:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.160 12:16:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.160 12:16:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:57.160 12:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.160 12:16:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.160 12:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:57.160 "name": "raid_bdev1", 00:17:57.160 "uuid": "f710b1b6-cb0b-4e2e-bc92-bb7fa523dd6a", 00:17:57.160 "strip_size_kb": 0, 00:17:57.160 "state": "online", 00:17:57.160 "raid_level": "raid1", 00:17:57.160 "superblock": false, 00:17:57.160 "num_base_bdevs": 2, 00:17:57.160 "num_base_bdevs_discovered": 1, 00:17:57.160 "num_base_bdevs_operational": 1, 00:17:57.160 "base_bdevs_list": [ 00:17:57.160 { 00:17:57.160 "name": null, 00:17:57.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.160 "is_configured": false, 00:17:57.160 "data_offset": 0, 00:17:57.160 "data_size": 65536 00:17:57.160 }, 00:17:57.160 { 00:17:57.160 "name": "BaseBdev2", 00:17:57.160 "uuid": "7d242ec7-2432-535c-9b04-bacac89e670f", 00:17:57.160 "is_configured": true, 00:17:57.160 "data_offset": 0, 00:17:57.160 "data_size": 65536 00:17:57.160 } 00:17:57.160 ] 00:17:57.160 }' 00:17:57.160 12:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:57.419 12:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- 
# [[ none == \n\o\n\e ]] 00:17:57.419 12:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:57.419 12:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:57.419 12:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:57.419 12:16:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.419 12:16:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:57.419 [2024-11-25 12:16:53.398890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:57.419 12:16:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.419 12:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:57.419 [2024-11-25 12:16:53.473705] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:57.419 [2024-11-25 12:16:53.476713] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:57.678 [2024-11-25 12:16:53.606157] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:57.678 [2024-11-25 12:16:53.606786] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:57.937 144.67 IOPS, 434.00 MiB/s [2024-11-25T12:16:54.028Z] [2024-11-25 12:16:53.829478] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:57.937 [2024-11-25 12:16:53.829988] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:58.196 [2024-11-25 12:16:54.179359] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 
offset_end: 12288 00:17:58.455 [2024-11-25 12:16:54.419518] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:58.455 12:16:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:58.455 12:16:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:58.455 12:16:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:58.455 12:16:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:58.455 12:16:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:58.455 12:16:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.455 12:16:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.455 12:16:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:58.455 12:16:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.455 12:16:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.455 12:16:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.455 "name": "raid_bdev1", 00:17:58.455 "uuid": "f710b1b6-cb0b-4e2e-bc92-bb7fa523dd6a", 00:17:58.455 "strip_size_kb": 0, 00:17:58.455 "state": "online", 00:17:58.455 "raid_level": "raid1", 00:17:58.455 "superblock": false, 00:17:58.455 "num_base_bdevs": 2, 00:17:58.455 "num_base_bdevs_discovered": 2, 00:17:58.455 "num_base_bdevs_operational": 2, 00:17:58.455 "process": { 00:17:58.455 "type": "rebuild", 00:17:58.455 "target": "spare", 00:17:58.455 "progress": { 00:17:58.455 "blocks": 10240, 00:17:58.455 "percent": 15 00:17:58.455 } 00:17:58.455 }, 00:17:58.455 "base_bdevs_list": [ 
00:17:58.455 { 00:17:58.455 "name": "spare", 00:17:58.455 "uuid": "8f3c9a8f-2d71-504d-8c63-4790d706907c", 00:17:58.455 "is_configured": true, 00:17:58.455 "data_offset": 0, 00:17:58.455 "data_size": 65536 00:17:58.455 }, 00:17:58.455 { 00:17:58.455 "name": "BaseBdev2", 00:17:58.455 "uuid": "7d242ec7-2432-535c-9b04-bacac89e670f", 00:17:58.455 "is_configured": true, 00:17:58.455 "data_offset": 0, 00:17:58.455 "data_size": 65536 00:17:58.455 } 00:17:58.455 ] 00:17:58.455 }' 00:17:58.455 12:16:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.715 12:16:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:58.715 12:16:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:58.715 12:16:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:58.715 12:16:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:58.715 12:16:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:58.715 12:16:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:58.715 12:16:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:58.715 12:16:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=433 00:17:58.715 12:16:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:58.715 12:16:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:58.715 12:16:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:58.715 12:16:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:58.715 12:16:54 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:17:58.715 12:16:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:58.715 12:16:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.715 12:16:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.715 12:16:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.715 12:16:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:58.715 12:16:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.715 12:16:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.715 "name": "raid_bdev1", 00:17:58.715 "uuid": "f710b1b6-cb0b-4e2e-bc92-bb7fa523dd6a", 00:17:58.715 "strip_size_kb": 0, 00:17:58.715 "state": "online", 00:17:58.715 "raid_level": "raid1", 00:17:58.715 "superblock": false, 00:17:58.715 "num_base_bdevs": 2, 00:17:58.715 "num_base_bdevs_discovered": 2, 00:17:58.715 "num_base_bdevs_operational": 2, 00:17:58.715 "process": { 00:17:58.715 "type": "rebuild", 00:17:58.715 "target": "spare", 00:17:58.715 "progress": { 00:17:58.715 "blocks": 10240, 00:17:58.715 "percent": 15 00:17:58.715 } 00:17:58.715 }, 00:17:58.715 "base_bdevs_list": [ 00:17:58.715 { 00:17:58.715 "name": "spare", 00:17:58.715 "uuid": "8f3c9a8f-2d71-504d-8c63-4790d706907c", 00:17:58.715 "is_configured": true, 00:17:58.715 "data_offset": 0, 00:17:58.715 "data_size": 65536 00:17:58.715 }, 00:17:58.715 { 00:17:58.715 "name": "BaseBdev2", 00:17:58.715 "uuid": "7d242ec7-2432-535c-9b04-bacac89e670f", 00:17:58.715 "is_configured": true, 00:17:58.715 "data_offset": 0, 00:17:58.715 "data_size": 65536 00:17:58.715 } 00:17:58.715 ] 00:17:58.715 }' 00:17:58.715 12:16:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.715 
12:16:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:58.715 12:16:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:58.715 [2024-11-25 12:16:54.743090] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:58.715 12:16:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:58.715 12:16:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:58.974 130.75 IOPS, 392.25 MiB/s [2024-11-25T12:16:55.065Z] [2024-11-25 12:16:54.878940] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:17:59.232 [2024-11-25 12:16:55.128805] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:17:59.492 [2024-11-25 12:16:55.362666] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:17:59.752 [2024-11-25 12:16:55.731824] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:17:59.752 12:16:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:59.752 12:16:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:59.752 12:16:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:59.752 12:16:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:59.752 12:16:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:59.752 12:16:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:59.752 12:16:55 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.752 12:16:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.752 12:16:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:59.752 12:16:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.752 12:16:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.752 12:16:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:59.752 "name": "raid_bdev1", 00:17:59.752 "uuid": "f710b1b6-cb0b-4e2e-bc92-bb7fa523dd6a", 00:17:59.752 "strip_size_kb": 0, 00:17:59.752 "state": "online", 00:17:59.752 "raid_level": "raid1", 00:17:59.752 "superblock": false, 00:17:59.752 "num_base_bdevs": 2, 00:17:59.752 "num_base_bdevs_discovered": 2, 00:17:59.752 "num_base_bdevs_operational": 2, 00:17:59.752 "process": { 00:17:59.752 "type": "rebuild", 00:17:59.752 "target": "spare", 00:17:59.752 "progress": { 00:17:59.752 "blocks": 26624, 00:17:59.752 "percent": 40 00:17:59.752 } 00:17:59.752 }, 00:17:59.752 "base_bdevs_list": [ 00:17:59.752 { 00:17:59.752 "name": "spare", 00:17:59.752 "uuid": "8f3c9a8f-2d71-504d-8c63-4790d706907c", 00:17:59.752 "is_configured": true, 00:17:59.752 "data_offset": 0, 00:17:59.752 "data_size": 65536 00:17:59.752 }, 00:17:59.752 { 00:17:59.752 "name": "BaseBdev2", 00:17:59.752 "uuid": "7d242ec7-2432-535c-9b04-bacac89e670f", 00:17:59.752 "is_configured": true, 00:17:59.752 "data_offset": 0, 00:17:59.752 "data_size": 65536 00:17:59.752 } 00:17:59.752 ] 00:17:59.752 }' 00:17:59.752 12:16:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.011 113.00 IOPS, 339.00 MiB/s [2024-11-25T12:16:56.102Z] 12:16:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:00.011 
12:16:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.011 12:16:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:00.011 12:16:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:00.011 [2024-11-25 12:16:55.959404] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:18:00.270 [2024-11-25 12:16:56.342560] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:18:00.529 [2024-11-25 12:16:56.570749] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:18:00.787 [2024-11-25 12:16:56.686105] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:18:01.046 101.50 IOPS, 304.50 MiB/s [2024-11-25T12:16:57.137Z] 12:16:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:01.046 12:16:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:01.046 12:16:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:01.046 12:16:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:01.046 12:16:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:01.046 12:16:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:01.046 12:16:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.046 12:16:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.046 12:16:56 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.046 12:16:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:01.046 12:16:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.046 12:16:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:01.046 "name": "raid_bdev1", 00:18:01.046 "uuid": "f710b1b6-cb0b-4e2e-bc92-bb7fa523dd6a", 00:18:01.046 "strip_size_kb": 0, 00:18:01.046 "state": "online", 00:18:01.046 "raid_level": "raid1", 00:18:01.046 "superblock": false, 00:18:01.046 "num_base_bdevs": 2, 00:18:01.046 "num_base_bdevs_discovered": 2, 00:18:01.046 "num_base_bdevs_operational": 2, 00:18:01.046 "process": { 00:18:01.046 "type": "rebuild", 00:18:01.046 "target": "spare", 00:18:01.046 "progress": { 00:18:01.046 "blocks": 45056, 00:18:01.046 "percent": 68 00:18:01.046 } 00:18:01.046 }, 00:18:01.046 "base_bdevs_list": [ 00:18:01.046 { 00:18:01.046 "name": "spare", 00:18:01.046 "uuid": "8f3c9a8f-2d71-504d-8c63-4790d706907c", 00:18:01.046 "is_configured": true, 00:18:01.046 "data_offset": 0, 00:18:01.046 "data_size": 65536 00:18:01.046 }, 00:18:01.046 { 00:18:01.046 "name": "BaseBdev2", 00:18:01.046 "uuid": "7d242ec7-2432-535c-9b04-bacac89e670f", 00:18:01.046 "is_configured": true, 00:18:01.046 "data_offset": 0, 00:18:01.046 "data_size": 65536 00:18:01.046 } 00:18:01.046 ] 00:18:01.046 }' 00:18:01.046 12:16:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:01.046 12:16:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:01.046 12:16:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:01.046 12:16:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:01.046 12:16:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:01.305 [2024-11-25 
12:16:57.293080] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:18:01.564 [2024-11-25 12:16:57.523820] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:18:02.086 90.86 IOPS, 272.57 MiB/s [2024-11-25T12:16:58.177Z] 12:16:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:02.087 12:16:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:02.087 12:16:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:02.087 12:16:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:02.087 12:16:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:02.087 12:16:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:02.087 12:16:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.087 12:16:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.087 12:16:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:02.087 12:16:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.087 12:16:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.087 12:16:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:02.087 "name": "raid_bdev1", 00:18:02.087 "uuid": "f710b1b6-cb0b-4e2e-bc92-bb7fa523dd6a", 00:18:02.087 "strip_size_kb": 0, 00:18:02.087 "state": "online", 00:18:02.087 "raid_level": "raid1", 00:18:02.087 "superblock": false, 00:18:02.087 "num_base_bdevs": 2, 00:18:02.087 "num_base_bdevs_discovered": 2, 00:18:02.087 
"num_base_bdevs_operational": 2, 00:18:02.087 "process": { 00:18:02.087 "type": "rebuild", 00:18:02.087 "target": "spare", 00:18:02.087 "progress": { 00:18:02.087 "blocks": 59392, 00:18:02.087 "percent": 90 00:18:02.087 } 00:18:02.087 }, 00:18:02.087 "base_bdevs_list": [ 00:18:02.087 { 00:18:02.087 "name": "spare", 00:18:02.087 "uuid": "8f3c9a8f-2d71-504d-8c63-4790d706907c", 00:18:02.087 "is_configured": true, 00:18:02.087 "data_offset": 0, 00:18:02.087 "data_size": 65536 00:18:02.087 }, 00:18:02.087 { 00:18:02.087 "name": "BaseBdev2", 00:18:02.087 "uuid": "7d242ec7-2432-535c-9b04-bacac89e670f", 00:18:02.087 "is_configured": true, 00:18:02.087 "data_offset": 0, 00:18:02.087 "data_size": 65536 00:18:02.087 } 00:18:02.087 ] 00:18:02.087 }' 00:18:02.087 12:16:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:02.345 12:16:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:02.345 12:16:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:02.345 12:16:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:02.345 12:16:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:02.345 [2024-11-25 12:16:58.322764] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:02.345 [2024-11-25 12:16:58.422641] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:02.345 [2024-11-25 12:16:58.426317] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:03.171 84.12 IOPS, 252.38 MiB/s [2024-11-25T12:16:59.262Z] 12:16:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:03.171 12:16:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:03.171 12:16:59 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:03.171 12:16:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:03.171 12:16:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:03.171 12:16:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:03.171 12:16:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.171 12:16:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.171 12:16:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.171 12:16:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:03.430 12:16:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.430 12:16:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:03.430 "name": "raid_bdev1", 00:18:03.430 "uuid": "f710b1b6-cb0b-4e2e-bc92-bb7fa523dd6a", 00:18:03.431 "strip_size_kb": 0, 00:18:03.431 "state": "online", 00:18:03.431 "raid_level": "raid1", 00:18:03.431 "superblock": false, 00:18:03.431 "num_base_bdevs": 2, 00:18:03.431 "num_base_bdevs_discovered": 2, 00:18:03.431 "num_base_bdevs_operational": 2, 00:18:03.431 "base_bdevs_list": [ 00:18:03.431 { 00:18:03.431 "name": "spare", 00:18:03.431 "uuid": "8f3c9a8f-2d71-504d-8c63-4790d706907c", 00:18:03.431 "is_configured": true, 00:18:03.431 "data_offset": 0, 00:18:03.431 "data_size": 65536 00:18:03.431 }, 00:18:03.431 { 00:18:03.431 "name": "BaseBdev2", 00:18:03.431 "uuid": "7d242ec7-2432-535c-9b04-bacac89e670f", 00:18:03.431 "is_configured": true, 00:18:03.431 "data_offset": 0, 00:18:03.431 "data_size": 65536 00:18:03.431 } 00:18:03.431 ] 00:18:03.431 }' 00:18:03.431 12:16:59 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:03.431 12:16:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:03.431 12:16:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:03.431 12:16:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:03.431 12:16:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:18:03.431 12:16:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:03.431 12:16:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:03.431 12:16:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:03.431 12:16:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:03.431 12:16:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:03.431 12:16:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.431 12:16:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.431 12:16:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:03.431 12:16:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.431 12:16:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.431 12:16:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:03.431 "name": "raid_bdev1", 00:18:03.431 "uuid": "f710b1b6-cb0b-4e2e-bc92-bb7fa523dd6a", 00:18:03.431 "strip_size_kb": 0, 00:18:03.431 "state": "online", 00:18:03.431 "raid_level": "raid1", 00:18:03.431 "superblock": false, 00:18:03.431 "num_base_bdevs": 2, 00:18:03.431 "num_base_bdevs_discovered": 
2, 00:18:03.431 "num_base_bdevs_operational": 2, 00:18:03.431 "base_bdevs_list": [ 00:18:03.431 { 00:18:03.431 "name": "spare", 00:18:03.431 "uuid": "8f3c9a8f-2d71-504d-8c63-4790d706907c", 00:18:03.431 "is_configured": true, 00:18:03.431 "data_offset": 0, 00:18:03.431 "data_size": 65536 00:18:03.431 }, 00:18:03.431 { 00:18:03.431 "name": "BaseBdev2", 00:18:03.431 "uuid": "7d242ec7-2432-535c-9b04-bacac89e670f", 00:18:03.431 "is_configured": true, 00:18:03.431 "data_offset": 0, 00:18:03.431 "data_size": 65536 00:18:03.431 } 00:18:03.431 ] 00:18:03.431 }' 00:18:03.431 12:16:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:03.431 12:16:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:03.690 12:16:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:03.690 12:16:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:03.690 12:16:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:03.690 12:16:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:03.690 12:16:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.690 12:16:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:03.690 12:16:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:03.690 12:16:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:03.690 12:16:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.690 12:16:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.690 12:16:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:03.690 12:16:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.690 12:16:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.690 12:16:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.690 12:16:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:03.690 12:16:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.690 12:16:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.690 12:16:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.690 "name": "raid_bdev1", 00:18:03.690 "uuid": "f710b1b6-cb0b-4e2e-bc92-bb7fa523dd6a", 00:18:03.690 "strip_size_kb": 0, 00:18:03.690 "state": "online", 00:18:03.690 "raid_level": "raid1", 00:18:03.690 "superblock": false, 00:18:03.690 "num_base_bdevs": 2, 00:18:03.690 "num_base_bdevs_discovered": 2, 00:18:03.690 "num_base_bdevs_operational": 2, 00:18:03.690 "base_bdevs_list": [ 00:18:03.690 { 00:18:03.690 "name": "spare", 00:18:03.690 "uuid": "8f3c9a8f-2d71-504d-8c63-4790d706907c", 00:18:03.690 "is_configured": true, 00:18:03.690 "data_offset": 0, 00:18:03.690 "data_size": 65536 00:18:03.690 }, 00:18:03.690 { 00:18:03.690 "name": "BaseBdev2", 00:18:03.690 "uuid": "7d242ec7-2432-535c-9b04-bacac89e670f", 00:18:03.690 "is_configured": true, 00:18:03.690 "data_offset": 0, 00:18:03.690 "data_size": 65536 00:18:03.690 } 00:18:03.690 ] 00:18:03.690 }' 00:18:03.690 12:16:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.690 12:16:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:04.208 78.11 IOPS, 234.33 MiB/s [2024-11-25T12:17:00.299Z] 12:17:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete 
raid_bdev1 00:18:04.208 12:17:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.208 12:17:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:04.208 [2024-11-25 12:17:00.100649] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:04.208 [2024-11-25 12:17:00.100998] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:04.208 00:18:04.208 Latency(us) 00:18:04.208 [2024-11-25T12:17:00.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.208 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:18:04.208 raid_bdev1 : 9.34 76.26 228.78 0.00 0.00 17492.74 275.55 130595.37 00:18:04.208 [2024-11-25T12:17:00.299Z] =================================================================================================================== 00:18:04.208 [2024-11-25T12:17:00.299Z] Total : 76.26 228.78 0.00 0.00 17492.74 275.55 130595.37 00:18:04.208 [2024-11-25 12:17:00.164963] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:04.208 [2024-11-25 12:17:00.165063] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:04.208 [2024-11-25 12:17:00.165191] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:04.208 [2024-11-25 12:17:00.165212] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:04.208 { 00:18:04.208 "results": [ 00:18:04.208 { 00:18:04.208 "job": "raid_bdev1", 00:18:04.208 "core_mask": "0x1", 00:18:04.208 "workload": "randrw", 00:18:04.208 "percentage": 50, 00:18:04.208 "status": "finished", 00:18:04.208 "queue_depth": 2, 00:18:04.208 "io_size": 3145728, 00:18:04.208 "runtime": 9.336343, 00:18:04.208 "iops": 76.2611227972237, 00:18:04.208 "mibps": 228.7833683916711, 
00:18:04.208 "io_failed": 0, 00:18:04.208 "io_timeout": 0, 00:18:04.208 "avg_latency_us": 17492.74279877426, 00:18:04.208 "min_latency_us": 275.5490909090909, 00:18:04.208 "max_latency_us": 130595.37454545454 00:18:04.208 } 00:18:04.208 ], 00:18:04.208 "core_count": 1 00:18:04.208 } 00:18:04.208 12:17:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.208 12:17:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.208 12:17:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:18:04.208 12:17:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.208 12:17:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:04.208 12:17:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.209 12:17:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:04.209 12:17:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:04.209 12:17:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:18:04.209 12:17:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:18:04.209 12:17:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:04.209 12:17:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:18:04.209 12:17:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:04.209 12:17:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:04.209 12:17:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:04.209 12:17:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:18:04.209 12:17:00 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:04.209 12:17:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:04.209 12:17:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:18:04.467 /dev/nbd0 00:18:04.467 12:17:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:04.467 12:17:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:04.467 12:17:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:04.467 12:17:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:18:04.467 12:17:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:04.467 12:17:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:04.467 12:17:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:04.726 12:17:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:18:04.726 12:17:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:04.726 12:17:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:04.726 12:17:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:04.726 1+0 records in 00:18:04.726 1+0 records out 00:18:04.726 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000610787 s, 6.7 MB/s 00:18:04.726 12:17:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:04.726 12:17:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:18:04.726 
12:17:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:04.726 12:17:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:04.726 12:17:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:18:04.726 12:17:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:04.726 12:17:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:04.726 12:17:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:04.726 12:17:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:18:04.726 12:17:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:18:04.726 12:17:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:04.726 12:17:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:18:04.726 12:17:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:04.726 12:17:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:18:04.726 12:17:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:04.726 12:17:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:18:04.726 12:17:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:04.726 12:17:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:04.726 12:17:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:18:04.985 /dev/nbd1 00:18:04.985 12:17:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd1 00:18:04.985 12:17:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:04.986 12:17:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:04.986 12:17:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:18:04.986 12:17:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:04.986 12:17:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:04.986 12:17:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:04.986 12:17:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:18:04.986 12:17:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:04.986 12:17:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:04.986 12:17:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:04.986 1+0 records in 00:18:04.986 1+0 records out 00:18:04.986 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000509857 s, 8.0 MB/s 00:18:04.986 12:17:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:04.986 12:17:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:18:04.986 12:17:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:04.986 12:17:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:04.986 12:17:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:18:04.986 12:17:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:04.986 12:17:00 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:04.986 12:17:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:05.244 12:17:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:18:05.244 12:17:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:05.244 12:17:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:18:05.244 12:17:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:05.244 12:17:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:18:05.244 12:17:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:05.244 12:17:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:05.503 12:17:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:05.503 12:17:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:05.503 12:17:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:05.503 12:17:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:05.503 12:17:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:05.503 12:17:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:05.503 12:17:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:18:05.503 12:17:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:05.503 12:17:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:05.503 12:17:01 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:05.503 12:17:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:05.503 12:17:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:05.503 12:17:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:18:05.503 12:17:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:05.503 12:17:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:05.762 12:17:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:05.762 12:17:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:05.762 12:17:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:05.762 12:17:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:05.762 12:17:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:05.762 12:17:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:05.762 12:17:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:18:05.762 12:17:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:05.762 12:17:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:18:05.762 12:17:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76689 00:18:05.762 12:17:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76689 ']' 00:18:05.762 12:17:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76689 00:18:05.762 12:17:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:18:05.762 12:17:01 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:05.762 12:17:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76689 00:18:05.762 killing process with pid 76689 00:18:05.762 Received shutdown signal, test time was about 10.989484 seconds 00:18:05.762 00:18:05.762 Latency(us) 00:18:05.762 [2024-11-25T12:17:01.853Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.762 [2024-11-25T12:17:01.853Z] =================================================================================================================== 00:18:05.762 [2024-11-25T12:17:01.853Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:05.762 12:17:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:05.762 12:17:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:05.762 12:17:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76689' 00:18:05.762 12:17:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76689 00:18:05.762 12:17:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76689 00:18:05.762 [2024-11-25 12:17:01.797748] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:06.020 [2024-11-25 12:17:02.038544] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:07.397 ************************************ 00:18:07.397 END TEST raid_rebuild_test_io 00:18:07.397 ************************************ 00:18:07.397 12:17:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:18:07.397 00:18:07.397 real 0m14.546s 00:18:07.397 user 0m18.796s 00:18:07.397 sys 0m1.475s 00:18:07.397 12:17:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:07.397 12:17:03 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:18:07.397 12:17:03 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:18:07.397 12:17:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:07.397 12:17:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:07.397 12:17:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:07.397 ************************************ 00:18:07.397 START TEST raid_rebuild_test_sb_io 00:18:07.397 ************************************ 00:18:07.397 12:17:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:18:07.397 12:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:07.397 12:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:07.397 12:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:07.397 12:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:18:07.397 12:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:07.397 12:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:07.397 12:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:07.397 12:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:07.397 12:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:07.397 12:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:07.397 12:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:07.397 12:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:07.397 12:17:03 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:07.397 12:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:07.397 12:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:07.397 12:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:07.397 12:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:07.397 12:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:07.397 12:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:07.397 12:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:07.397 12:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:07.397 12:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:07.397 12:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:07.397 12:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:07.397 12:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77095 00:18:07.397 12:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77095 00:18:07.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:07.397 12:17:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 77095 ']' 00:18:07.397 12:17:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.397 12:17:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:07.397 12:17:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.397 12:17:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:07.397 12:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:07.397 12:17:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:07.397 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:07.397 Zero copy mechanism will not be used. 00:18:07.397 [2024-11-25 12:17:03.464771] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:18:07.397 [2024-11-25 12:17:03.464944] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77095 ] 00:18:07.718 [2024-11-25 12:17:03.659186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.011 [2024-11-25 12:17:03.855962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.270 [2024-11-25 12:17:04.107667] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:08.270 [2024-11-25 12:17:04.107741] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:08.529 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:08.529 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:18:08.529 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:08.529 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:08.529 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.529 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:08.529 BaseBdev1_malloc 00:18:08.529 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.529 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:08.529 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.529 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:08.529 [2024-11-25 12:17:04.553676] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:08.529 [2024-11-25 12:17:04.553795] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:08.529 [2024-11-25 12:17:04.553855] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:08.529 [2024-11-25 12:17:04.553892] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.529 [2024-11-25 12:17:04.558150] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.529 [2024-11-25 12:17:04.558570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:08.529 BaseBdev1 00:18:08.529 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.529 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:08.529 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:08.529 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.529 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:08.529 BaseBdev2_malloc 00:18:08.529 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.529 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:08.529 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.529 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:08.529 [2024-11-25 12:17:04.614906] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:08.529 [2024-11-25 12:17:04.615018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:18:08.529 [2024-11-25 12:17:04.615051] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:08.529 [2024-11-25 12:17:04.615074] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.529 [2024-11-25 12:17:04.618055] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.789 [2024-11-25 12:17:04.618371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:08.789 BaseBdev2 00:18:08.789 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.789 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:08.789 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.789 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:08.789 spare_malloc 00:18:08.789 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.789 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:08.789 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.789 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:08.789 spare_delay 00:18:08.789 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.789 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:08.789 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.789 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:08.789 
[2024-11-25 12:17:04.697609] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:08.789 [2024-11-25 12:17:04.697718] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:08.789 [2024-11-25 12:17:04.697756] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:08.789 [2024-11-25 12:17:04.697779] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.789 [2024-11-25 12:17:04.701487] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.789 [2024-11-25 12:17:04.701561] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:08.789 spare 00:18:08.789 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.789 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:08.789 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.789 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:08.789 [2024-11-25 12:17:04.705852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:08.789 [2024-11-25 12:17:04.708849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:08.789 [2024-11-25 12:17:04.709319] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:08.789 [2024-11-25 12:17:04.709377] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:08.789 [2024-11-25 12:17:04.709788] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:08.789 [2024-11-25 12:17:04.710060] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:08.789 [2024-11-25 
12:17:04.710092] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:08.789 [2024-11-25 12:17:04.710437] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:08.789 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.789 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:08.789 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:08.789 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:08.789 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:08.789 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:08.789 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:08.789 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.790 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.790 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.790 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.790 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.790 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.790 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.790 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:08.790 12:17:04 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.790 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.790 "name": "raid_bdev1", 00:18:08.790 "uuid": "fc5b706f-6d2b-471d-aafc-3e7862bcbc52", 00:18:08.790 "strip_size_kb": 0, 00:18:08.790 "state": "online", 00:18:08.790 "raid_level": "raid1", 00:18:08.790 "superblock": true, 00:18:08.790 "num_base_bdevs": 2, 00:18:08.790 "num_base_bdevs_discovered": 2, 00:18:08.790 "num_base_bdevs_operational": 2, 00:18:08.790 "base_bdevs_list": [ 00:18:08.790 { 00:18:08.790 "name": "BaseBdev1", 00:18:08.790 "uuid": "ae026672-7800-5dc3-98c3-a3cc2a18d1b1", 00:18:08.790 "is_configured": true, 00:18:08.790 "data_offset": 2048, 00:18:08.790 "data_size": 63488 00:18:08.790 }, 00:18:08.790 { 00:18:08.790 "name": "BaseBdev2", 00:18:08.790 "uuid": "81dfa88b-cb87-5ce7-bbcc-c20f0ebc0ff5", 00:18:08.790 "is_configured": true, 00:18:08.790 "data_offset": 2048, 00:18:08.790 "data_size": 63488 00:18:08.790 } 00:18:08.790 ] 00:18:08.790 }' 00:18:08.790 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.790 12:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:09.358 12:17:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:09.358 12:17:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:09.358 12:17:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.358 12:17:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:09.358 [2024-11-25 12:17:05.242917] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:09.358 12:17:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.358 12:17:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=63488 00:18:09.358 12:17:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.358 12:17:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:09.358 12:17:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.358 12:17:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:09.358 12:17:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.358 12:17:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:18:09.358 12:17:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:18:09.358 12:17:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:09.358 12:17:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.358 12:17:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:09.358 12:17:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:09.358 [2024-11-25 12:17:05.350548] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:09.358 12:17:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.358 12:17:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:09.358 12:17:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:09.358 12:17:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:09.358 12:17:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:18:09.358 12:17:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:09.358 12:17:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:09.358 12:17:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.358 12:17:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.358 12:17:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.358 12:17:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.358 12:17:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.358 12:17:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.358 12:17:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.358 12:17:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:09.358 12:17:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.358 12:17:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.358 "name": "raid_bdev1", 00:18:09.358 "uuid": "fc5b706f-6d2b-471d-aafc-3e7862bcbc52", 00:18:09.358 "strip_size_kb": 0, 00:18:09.358 "state": "online", 00:18:09.358 "raid_level": "raid1", 00:18:09.358 "superblock": true, 00:18:09.358 "num_base_bdevs": 2, 00:18:09.358 "num_base_bdevs_discovered": 1, 00:18:09.358 "num_base_bdevs_operational": 1, 00:18:09.358 "base_bdevs_list": [ 00:18:09.358 { 00:18:09.358 "name": null, 00:18:09.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.358 "is_configured": false, 00:18:09.358 "data_offset": 0, 00:18:09.358 "data_size": 63488 00:18:09.358 }, 00:18:09.358 { 00:18:09.358 "name": "BaseBdev2", 00:18:09.358 "uuid": 
"81dfa88b-cb87-5ce7-bbcc-c20f0ebc0ff5", 00:18:09.358 "is_configured": true, 00:18:09.358 "data_offset": 2048, 00:18:09.358 "data_size": 63488 00:18:09.358 } 00:18:09.358 ] 00:18:09.358 }' 00:18:09.358 12:17:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.358 12:17:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:09.616 [2024-11-25 12:17:05.479076] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:09.616 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:09.616 Zero copy mechanism will not be used. 00:18:09.616 Running I/O for 60 seconds... 00:18:09.879 12:17:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:09.879 12:17:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.879 12:17:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:09.879 [2024-11-25 12:17:05.893016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:09.879 12:17:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.879 12:17:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:09.879 [2024-11-25 12:17:05.957687] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:09.879 [2024-11-25 12:17:05.960383] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:10.141 [2024-11-25 12:17:06.062809] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:10.141 [2024-11-25 12:17:06.063766] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:10.400 [2024-11-25 12:17:06.300750] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:10.659 152.00 IOPS, 456.00 MiB/s [2024-11-25T12:17:06.751Z] [2024-11-25 12:17:06.649891] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:10.919 [2024-11-25 12:17:06.786856] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:10.919 12:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:10.919 12:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:10.919 12:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:10.919 12:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:10.919 12:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:10.919 12:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.919 12:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.919 12:17:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.919 12:17:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:10.919 12:17:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.919 12:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:10.919 "name": "raid_bdev1", 00:18:10.919 "uuid": "fc5b706f-6d2b-471d-aafc-3e7862bcbc52", 00:18:10.919 "strip_size_kb": 0, 00:18:10.919 "state": "online", 00:18:10.919 "raid_level": "raid1", 00:18:10.919 "superblock": true, 00:18:10.919 "num_base_bdevs": 2, 
00:18:10.919 "num_base_bdevs_discovered": 2, 00:18:10.919 "num_base_bdevs_operational": 2, 00:18:10.919 "process": { 00:18:10.919 "type": "rebuild", 00:18:10.919 "target": "spare", 00:18:10.919 "progress": { 00:18:10.919 "blocks": 12288, 00:18:10.919 "percent": 19 00:18:10.919 } 00:18:10.919 }, 00:18:10.919 "base_bdevs_list": [ 00:18:10.919 { 00:18:10.919 "name": "spare", 00:18:10.919 "uuid": "c23fb3e4-0354-583b-ac47-7117976e7735", 00:18:10.919 "is_configured": true, 00:18:10.919 "data_offset": 2048, 00:18:10.919 "data_size": 63488 00:18:10.919 }, 00:18:10.919 { 00:18:10.919 "name": "BaseBdev2", 00:18:10.919 "uuid": "81dfa88b-cb87-5ce7-bbcc-c20f0ebc0ff5", 00:18:10.919 "is_configured": true, 00:18:10.919 "data_offset": 2048, 00:18:10.919 "data_size": 63488 00:18:10.919 } 00:18:10.919 ] 00:18:10.919 }' 00:18:10.919 12:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:11.179 [2024-11-25 12:17:07.040985] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:11.179 12:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:11.179 12:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:11.179 12:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:11.179 12:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:11.179 12:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.179 12:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:11.179 [2024-11-25 12:17:07.109101] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:11.483 [2024-11-25 12:17:07.269244] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished 
rebuild on raid bdev raid_bdev1: No such device 00:18:11.483 [2024-11-25 12:17:07.272390] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:11.483 [2024-11-25 12:17:07.272431] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:11.483 [2024-11-25 12:17:07.272451] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:11.483 [2024-11-25 12:17:07.316146] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:18:11.483 12:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.483 12:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:11.483 12:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:11.483 12:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:11.483 12:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:11.483 12:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:11.483 12:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:11.483 12:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.483 12:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.483 12:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.483 12:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.483 12:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.483 12:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:18:11.483 12:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.483 12:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:11.483 12:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.483 12:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.483 "name": "raid_bdev1", 00:18:11.483 "uuid": "fc5b706f-6d2b-471d-aafc-3e7862bcbc52", 00:18:11.483 "strip_size_kb": 0, 00:18:11.483 "state": "online", 00:18:11.483 "raid_level": "raid1", 00:18:11.483 "superblock": true, 00:18:11.483 "num_base_bdevs": 2, 00:18:11.483 "num_base_bdevs_discovered": 1, 00:18:11.483 "num_base_bdevs_operational": 1, 00:18:11.483 "base_bdevs_list": [ 00:18:11.483 { 00:18:11.483 "name": null, 00:18:11.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.483 "is_configured": false, 00:18:11.483 "data_offset": 0, 00:18:11.483 "data_size": 63488 00:18:11.483 }, 00:18:11.483 { 00:18:11.483 "name": "BaseBdev2", 00:18:11.483 "uuid": "81dfa88b-cb87-5ce7-bbcc-c20f0ebc0ff5", 00:18:11.483 "is_configured": true, 00:18:11.483 "data_offset": 2048, 00:18:11.483 "data_size": 63488 00:18:11.483 } 00:18:11.483 ] 00:18:11.483 }' 00:18:11.483 12:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.483 12:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:12.052 118.50 IOPS, 355.50 MiB/s [2024-11-25T12:17:08.143Z] 12:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:12.052 12:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:12.052 12:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:12.052 12:17:07 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:18:12.052 12:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:12.052 12:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.052 12:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.052 12:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.052 12:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:12.052 12:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.052 12:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:12.052 "name": "raid_bdev1", 00:18:12.052 "uuid": "fc5b706f-6d2b-471d-aafc-3e7862bcbc52", 00:18:12.052 "strip_size_kb": 0, 00:18:12.052 "state": "online", 00:18:12.052 "raid_level": "raid1", 00:18:12.052 "superblock": true, 00:18:12.052 "num_base_bdevs": 2, 00:18:12.052 "num_base_bdevs_discovered": 1, 00:18:12.052 "num_base_bdevs_operational": 1, 00:18:12.052 "base_bdevs_list": [ 00:18:12.052 { 00:18:12.052 "name": null, 00:18:12.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.052 "is_configured": false, 00:18:12.052 "data_offset": 0, 00:18:12.052 "data_size": 63488 00:18:12.052 }, 00:18:12.052 { 00:18:12.052 "name": "BaseBdev2", 00:18:12.052 "uuid": "81dfa88b-cb87-5ce7-bbcc-c20f0ebc0ff5", 00:18:12.052 "is_configured": true, 00:18:12.052 "data_offset": 2048, 00:18:12.052 "data_size": 63488 00:18:12.052 } 00:18:12.052 ] 00:18:12.052 }' 00:18:12.052 12:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:12.052 12:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:12.052 12:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:18:12.052 12:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:12.052 12:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:12.052 12:17:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.052 12:17:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:12.052 [2024-11-25 12:17:08.024650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:12.052 12:17:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.052 12:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:12.052 [2024-11-25 12:17:08.111283] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:12.052 [2024-11-25 12:17:08.113959] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:12.311 [2024-11-25 12:17:08.232209] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:12.311 [2024-11-25 12:17:08.232993] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:12.571 [2024-11-25 12:17:08.443722] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:12.571 [2024-11-25 12:17:08.444262] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:12.830 132.33 IOPS, 397.00 MiB/s [2024-11-25T12:17:08.921Z] [2024-11-25 12:17:08.889838] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:13.088 12:17:09 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:13.088 12:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.088 12:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:13.088 12:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:13.088 12:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.088 12:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.088 12:17:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.088 12:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.088 12:17:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:13.088 12:17:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.088 12:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:13.088 "name": "raid_bdev1", 00:18:13.088 "uuid": "fc5b706f-6d2b-471d-aafc-3e7862bcbc52", 00:18:13.088 "strip_size_kb": 0, 00:18:13.088 "state": "online", 00:18:13.088 "raid_level": "raid1", 00:18:13.088 "superblock": true, 00:18:13.088 "num_base_bdevs": 2, 00:18:13.088 "num_base_bdevs_discovered": 2, 00:18:13.088 "num_base_bdevs_operational": 2, 00:18:13.088 "process": { 00:18:13.088 "type": "rebuild", 00:18:13.088 "target": "spare", 00:18:13.088 "progress": { 00:18:13.088 "blocks": 10240, 00:18:13.088 "percent": 16 00:18:13.088 } 00:18:13.088 }, 00:18:13.088 "base_bdevs_list": [ 00:18:13.088 { 00:18:13.088 "name": "spare", 00:18:13.088 "uuid": "c23fb3e4-0354-583b-ac47-7117976e7735", 00:18:13.088 "is_configured": true, 00:18:13.088 "data_offset": 2048, 00:18:13.088 "data_size": 63488 
00:18:13.088 }, 00:18:13.088 { 00:18:13.088 "name": "BaseBdev2", 00:18:13.088 "uuid": "81dfa88b-cb87-5ce7-bbcc-c20f0ebc0ff5", 00:18:13.088 "is_configured": true, 00:18:13.088 "data_offset": 2048, 00:18:13.088 "data_size": 63488 00:18:13.088 } 00:18:13.088 ] 00:18:13.088 }' 00:18:13.088 12:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:13.350 12:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:13.350 12:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:13.350 12:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:13.350 12:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:13.350 12:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:13.351 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:13.351 12:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:13.351 12:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:13.351 12:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:13.351 12:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=448 00:18:13.351 12:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:13.351 12:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:13.351 12:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.351 12:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:13.351 12:17:09 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:13.351 12:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.351 12:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.351 12:17:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.351 12:17:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:13.351 12:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.351 12:17:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.351 [2024-11-25 12:17:09.260633] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:13.351 12:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:13.351 "name": "raid_bdev1", 00:18:13.351 "uuid": "fc5b706f-6d2b-471d-aafc-3e7862bcbc52", 00:18:13.351 "strip_size_kb": 0, 00:18:13.351 "state": "online", 00:18:13.351 "raid_level": "raid1", 00:18:13.351 "superblock": true, 00:18:13.351 "num_base_bdevs": 2, 00:18:13.351 "num_base_bdevs_discovered": 2, 00:18:13.351 "num_base_bdevs_operational": 2, 00:18:13.351 "process": { 00:18:13.351 "type": "rebuild", 00:18:13.351 "target": "spare", 00:18:13.351 "progress": { 00:18:13.351 "blocks": 12288, 00:18:13.351 "percent": 19 00:18:13.351 } 00:18:13.351 }, 00:18:13.351 "base_bdevs_list": [ 00:18:13.351 { 00:18:13.351 "name": "spare", 00:18:13.351 "uuid": "c23fb3e4-0354-583b-ac47-7117976e7735", 00:18:13.351 "is_configured": true, 00:18:13.351 "data_offset": 2048, 00:18:13.351 "data_size": 63488 00:18:13.351 }, 00:18:13.351 { 00:18:13.351 "name": "BaseBdev2", 00:18:13.351 "uuid": "81dfa88b-cb87-5ce7-bbcc-c20f0ebc0ff5", 00:18:13.351 "is_configured": true, 00:18:13.351 
"data_offset": 2048, 00:18:13.351 "data_size": 63488 00:18:13.351 } 00:18:13.351 ] 00:18:13.351 }' 00:18:13.351 12:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:13.351 12:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:13.351 12:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:13.351 [2024-11-25 12:17:09.362952] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:18:13.351 [2024-11-25 12:17:09.363366] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:18:13.351 12:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:13.351 12:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:13.608 126.75 IOPS, 380.25 MiB/s [2024-11-25T12:17:09.699Z] [2024-11-25 12:17:09.576737] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:18:13.608 [2024-11-25 12:17:09.577519] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:18:13.867 [2024-11-25 12:17:09.797620] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:18:13.867 [2024-11-25 12:17:09.798026] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:18:14.126 [2024-11-25 12:17:10.047027] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:18:14.385 12:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:14.385 12:17:10 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:14.385 12:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:14.385 12:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:14.385 12:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:14.385 12:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:14.385 12:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.385 12:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.385 12:17:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.385 12:17:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:14.385 12:17:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.385 12:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:14.385 "name": "raid_bdev1", 00:18:14.385 "uuid": "fc5b706f-6d2b-471d-aafc-3e7862bcbc52", 00:18:14.385 "strip_size_kb": 0, 00:18:14.385 "state": "online", 00:18:14.385 "raid_level": "raid1", 00:18:14.385 "superblock": true, 00:18:14.385 "num_base_bdevs": 2, 00:18:14.385 "num_base_bdevs_discovered": 2, 00:18:14.385 "num_base_bdevs_operational": 2, 00:18:14.385 "process": { 00:18:14.385 "type": "rebuild", 00:18:14.385 "target": "spare", 00:18:14.385 "progress": { 00:18:14.385 "blocks": 30720, 00:18:14.385 "percent": 48 00:18:14.385 } 00:18:14.385 }, 00:18:14.385 "base_bdevs_list": [ 00:18:14.385 { 00:18:14.385 "name": "spare", 00:18:14.385 "uuid": "c23fb3e4-0354-583b-ac47-7117976e7735", 00:18:14.385 "is_configured": true, 00:18:14.385 "data_offset": 2048, 
00:18:14.385 "data_size": 63488 00:18:14.385 }, 00:18:14.385 { 00:18:14.385 "name": "BaseBdev2", 00:18:14.385 "uuid": "81dfa88b-cb87-5ce7-bbcc-c20f0ebc0ff5", 00:18:14.385 "is_configured": true, 00:18:14.385 "data_offset": 2048, 00:18:14.385 "data_size": 63488 00:18:14.385 } 00:18:14.385 ] 00:18:14.385 }' 00:18:14.385 12:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:14.644 12:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:14.644 12:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:14.644 115.00 IOPS, 345.00 MiB/s [2024-11-25T12:17:10.735Z] 12:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:14.644 12:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:14.644 [2024-11-25 12:17:10.590583] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:18:14.903 [2024-11-25 12:17:10.941875] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:18:15.475 [2024-11-25 12:17:11.429718] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:18:15.475 103.67 IOPS, 311.00 MiB/s [2024-11-25T12:17:11.566Z] 12:17:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:15.475 12:17:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:15.475 12:17:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:15.475 12:17:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:15.475 12:17:11 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:15.475 12:17:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:15.735 12:17:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.735 12:17:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.735 12:17:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.735 12:17:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:15.735 12:17:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.735 12:17:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:15.735 "name": "raid_bdev1", 00:18:15.735 "uuid": "fc5b706f-6d2b-471d-aafc-3e7862bcbc52", 00:18:15.735 "strip_size_kb": 0, 00:18:15.735 "state": "online", 00:18:15.735 "raid_level": "raid1", 00:18:15.735 "superblock": true, 00:18:15.735 "num_base_bdevs": 2, 00:18:15.735 "num_base_bdevs_discovered": 2, 00:18:15.735 "num_base_bdevs_operational": 2, 00:18:15.735 "process": { 00:18:15.735 "type": "rebuild", 00:18:15.735 "target": "spare", 00:18:15.735 "progress": { 00:18:15.735 "blocks": 47104, 00:18:15.735 "percent": 74 00:18:15.735 } 00:18:15.735 }, 00:18:15.735 "base_bdevs_list": [ 00:18:15.735 { 00:18:15.735 "name": "spare", 00:18:15.735 "uuid": "c23fb3e4-0354-583b-ac47-7117976e7735", 00:18:15.735 "is_configured": true, 00:18:15.735 "data_offset": 2048, 00:18:15.735 "data_size": 63488 00:18:15.735 }, 00:18:15.735 { 00:18:15.735 "name": "BaseBdev2", 00:18:15.735 "uuid": "81dfa88b-cb87-5ce7-bbcc-c20f0ebc0ff5", 00:18:15.735 "is_configured": true, 00:18:15.735 "data_offset": 2048, 00:18:15.735 "data_size": 63488 00:18:15.735 } 00:18:15.735 ] 00:18:15.735 }' 00:18:15.735 12:17:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:18:15.735 12:17:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:15.735 12:17:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:15.735 12:17:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:15.735 12:17:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:16.673 [2024-11-25 12:17:12.425559] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:16.673 94.14 IOPS, 282.43 MiB/s [2024-11-25T12:17:12.764Z] [2024-11-25 12:17:12.525551] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:16.673 [2024-11-25 12:17:12.528360] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:16.673 12:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:16.673 12:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:16.673 12:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:16.673 12:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:16.673 12:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:16.673 12:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:16.673 12:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.673 12:17:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.673 12:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.673 12:17:12 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:16.673 12:17:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.933 12:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:16.933 "name": "raid_bdev1", 00:18:16.933 "uuid": "fc5b706f-6d2b-471d-aafc-3e7862bcbc52", 00:18:16.933 "strip_size_kb": 0, 00:18:16.933 "state": "online", 00:18:16.933 "raid_level": "raid1", 00:18:16.933 "superblock": true, 00:18:16.933 "num_base_bdevs": 2, 00:18:16.933 "num_base_bdevs_discovered": 2, 00:18:16.933 "num_base_bdevs_operational": 2, 00:18:16.933 "base_bdevs_list": [ 00:18:16.933 { 00:18:16.933 "name": "spare", 00:18:16.933 "uuid": "c23fb3e4-0354-583b-ac47-7117976e7735", 00:18:16.933 "is_configured": true, 00:18:16.933 "data_offset": 2048, 00:18:16.933 "data_size": 63488 00:18:16.933 }, 00:18:16.933 { 00:18:16.933 "name": "BaseBdev2", 00:18:16.933 "uuid": "81dfa88b-cb87-5ce7-bbcc-c20f0ebc0ff5", 00:18:16.933 "is_configured": true, 00:18:16.933 "data_offset": 2048, 00:18:16.933 "data_size": 63488 00:18:16.933 } 00:18:16.933 ] 00:18:16.933 }' 00:18:16.933 12:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:16.933 12:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:16.933 12:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:16.933 12:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:16.933 12:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:18:16.933 12:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:16.933 12:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:16.933 12:17:12 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:16.933 12:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:16.933 12:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:16.933 12:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.933 12:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.933 12:17:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.933 12:17:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:16.933 12:17:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.933 12:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:16.933 "name": "raid_bdev1", 00:18:16.933 "uuid": "fc5b706f-6d2b-471d-aafc-3e7862bcbc52", 00:18:16.933 "strip_size_kb": 0, 00:18:16.933 "state": "online", 00:18:16.933 "raid_level": "raid1", 00:18:16.933 "superblock": true, 00:18:16.933 "num_base_bdevs": 2, 00:18:16.933 "num_base_bdevs_discovered": 2, 00:18:16.933 "num_base_bdevs_operational": 2, 00:18:16.933 "base_bdevs_list": [ 00:18:16.933 { 00:18:16.933 "name": "spare", 00:18:16.933 "uuid": "c23fb3e4-0354-583b-ac47-7117976e7735", 00:18:16.933 "is_configured": true, 00:18:16.933 "data_offset": 2048, 00:18:16.933 "data_size": 63488 00:18:16.933 }, 00:18:16.933 { 00:18:16.933 "name": "BaseBdev2", 00:18:16.933 "uuid": "81dfa88b-cb87-5ce7-bbcc-c20f0ebc0ff5", 00:18:16.933 "is_configured": true, 00:18:16.933 "data_offset": 2048, 00:18:16.933 "data_size": 63488 00:18:16.933 } 00:18:16.933 ] 00:18:16.933 }' 00:18:16.933 12:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:16.933 12:17:12 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:16.933 12:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:17.192 12:17:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:17.192 12:17:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:17.192 12:17:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:17.192 12:17:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:17.192 12:17:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:17.192 12:17:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:17.192 12:17:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:17.192 12:17:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.192 12:17:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.192 12:17:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.192 12:17:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.192 12:17:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.192 12:17:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.192 12:17:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.192 12:17:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:17.192 12:17:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:18:17.192 12:17:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.192 "name": "raid_bdev1", 00:18:17.192 "uuid": "fc5b706f-6d2b-471d-aafc-3e7862bcbc52", 00:18:17.192 "strip_size_kb": 0, 00:18:17.192 "state": "online", 00:18:17.192 "raid_level": "raid1", 00:18:17.192 "superblock": true, 00:18:17.192 "num_base_bdevs": 2, 00:18:17.192 "num_base_bdevs_discovered": 2, 00:18:17.192 "num_base_bdevs_operational": 2, 00:18:17.192 "base_bdevs_list": [ 00:18:17.192 { 00:18:17.192 "name": "spare", 00:18:17.192 "uuid": "c23fb3e4-0354-583b-ac47-7117976e7735", 00:18:17.192 "is_configured": true, 00:18:17.192 "data_offset": 2048, 00:18:17.192 "data_size": 63488 00:18:17.192 }, 00:18:17.193 { 00:18:17.193 "name": "BaseBdev2", 00:18:17.193 "uuid": "81dfa88b-cb87-5ce7-bbcc-c20f0ebc0ff5", 00:18:17.193 "is_configured": true, 00:18:17.193 "data_offset": 2048, 00:18:17.193 "data_size": 63488 00:18:17.193 } 00:18:17.193 ] 00:18:17.193 }' 00:18:17.193 12:17:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.193 12:17:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:17.711 86.75 IOPS, 260.25 MiB/s [2024-11-25T12:17:13.802Z] 12:17:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:17.711 12:17:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.711 12:17:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:17.711 [2024-11-25 12:17:13.581669] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:17.711 [2024-11-25 12:17:13.581706] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:17.711 00:18:17.711 Latency(us) 00:18:17.711 [2024-11-25T12:17:13.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.711 Job: 
raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:18:17.711 raid_bdev1 : 8.15 85.55 256.64 0.00 0.00 15958.51 275.55 111530.36 00:18:17.711 [2024-11-25T12:17:13.802Z] =================================================================================================================== 00:18:17.711 [2024-11-25T12:17:13.802Z] Total : 85.55 256.64 0.00 0.00 15958.51 275.55 111530.36 00:18:17.711 { 00:18:17.711 "results": [ 00:18:17.711 { 00:18:17.711 "job": "raid_bdev1", 00:18:17.711 "core_mask": "0x1", 00:18:17.711 "workload": "randrw", 00:18:17.711 "percentage": 50, 00:18:17.711 "status": "finished", 00:18:17.711 "queue_depth": 2, 00:18:17.711 "io_size": 3145728, 00:18:17.711 "runtime": 8.147499, 00:18:17.711 "iops": 85.54772452258048, 00:18:17.711 "mibps": 256.6431735677414, 00:18:17.711 "io_failed": 0, 00:18:17.711 "io_timeout": 0, 00:18:17.711 "avg_latency_us": 15958.506430155212, 00:18:17.711 "min_latency_us": 275.5490909090909, 00:18:17.711 "max_latency_us": 111530.35636363637 00:18:17.711 } 00:18:17.711 ], 00:18:17.711 "core_count": 1 00:18:17.711 } 00:18:17.711 [2024-11-25 12:17:13.649577] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:17.711 [2024-11-25 12:17:13.649643] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:17.711 [2024-11-25 12:17:13.649762] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:17.711 [2024-11-25 12:17:13.649780] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:17.711 12:17:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.711 12:17:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.711 12:17:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.711 12:17:13 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:18:17.711 12:17:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:17.711 12:17:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.711 12:17:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:17.711 12:17:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:17.711 12:17:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:18:17.711 12:17:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:18:17.711 12:17:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:17.711 12:17:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:18:17.711 12:17:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:17.711 12:17:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:17.711 12:17:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:17.711 12:17:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:18:17.711 12:17:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:17.711 12:17:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:17.711 12:17:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:18:17.970 /dev/nbd0 00:18:17.970 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:17.970 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:17.970 12:17:14 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:17.970 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:18:17.970 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:17.970 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:17.970 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:17.970 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:18:17.970 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:17.970 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:17.970 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:17.970 1+0 records in 00:18:17.970 1+0 records out 00:18:17.970 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434485 s, 9.4 MB/s 00:18:18.230 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.230 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:18:18.230 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.230 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:18.230 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:18:18.230 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:18.230 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:18.230 
12:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:18.230 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:18:18.230 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:18:18.230 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:18.230 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:18:18.230 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:18.230 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:18:18.230 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:18.230 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:18:18.230 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:18.230 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:18.230 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:18:18.489 /dev/nbd1 00:18:18.489 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:18.489 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:18.489 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:18.489 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:18:18.489 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:18.489 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:18.489 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:18.489 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:18:18.489 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:18.489 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:18.489 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:18.489 1+0 records in 00:18:18.489 1+0 records out 00:18:18.489 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000410681 s, 10.0 MB/s 00:18:18.489 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.489 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:18:18.489 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.489 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:18.489 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:18:18.489 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:18.489 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:18.489 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:18.489 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:18:18.489 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 
00:18:18.489 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:18:18.489 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:18.489 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:18:18.489 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:18.489 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:19.056 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:19.056 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:19.056 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:19.056 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:19.056 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:19.056 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:19.056 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:18:19.057 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:19.057 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:19.057 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:19.057 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:19.057 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:19.057 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:18:19.057 12:17:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:19.057 12:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:19.057 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:19.057 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:19.057 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:19.057 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:19.057 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:19.057 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:19.057 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:18:19.057 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:19.057 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:19.057 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:19.057 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.057 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:19.057 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.057 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:19.057 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.057 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:19.057 
[2024-11-25 12:17:15.144220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:19.057 [2024-11-25 12:17:15.144296] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:19.057 [2024-11-25 12:17:15.144332] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:18:19.057 [2024-11-25 12:17:15.144362] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:19.315 [2024-11-25 12:17:15.147274] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:19.315 [2024-11-25 12:17:15.147320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:19.315 [2024-11-25 12:17:15.147453] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:19.316 [2024-11-25 12:17:15.147530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:19.316 [2024-11-25 12:17:15.147705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:19.316 spare 00:18:19.316 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.316 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:19.316 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.316 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:19.316 [2024-11-25 12:17:15.247837] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:19.316 [2024-11-25 12:17:15.247927] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:19.316 [2024-11-25 12:17:15.248410] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:18:19.316 [2024-11-25 12:17:15.248686] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:19.316 [2024-11-25 12:17:15.248723] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:19.316 [2024-11-25 12:17:15.249008] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:19.316 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.316 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:19.316 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:19.316 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:19.316 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:19.316 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:19.316 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:19.316 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.316 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.316 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.316 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.316 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.316 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.316 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:19.316 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.316 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.316 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.316 "name": "raid_bdev1", 00:18:19.316 "uuid": "fc5b706f-6d2b-471d-aafc-3e7862bcbc52", 00:18:19.316 "strip_size_kb": 0, 00:18:19.316 "state": "online", 00:18:19.316 "raid_level": "raid1", 00:18:19.316 "superblock": true, 00:18:19.316 "num_base_bdevs": 2, 00:18:19.316 "num_base_bdevs_discovered": 2, 00:18:19.316 "num_base_bdevs_operational": 2, 00:18:19.316 "base_bdevs_list": [ 00:18:19.316 { 00:18:19.316 "name": "spare", 00:18:19.316 "uuid": "c23fb3e4-0354-583b-ac47-7117976e7735", 00:18:19.316 "is_configured": true, 00:18:19.316 "data_offset": 2048, 00:18:19.316 "data_size": 63488 00:18:19.316 }, 00:18:19.316 { 00:18:19.316 "name": "BaseBdev2", 00:18:19.316 "uuid": "81dfa88b-cb87-5ce7-bbcc-c20f0ebc0ff5", 00:18:19.316 "is_configured": true, 00:18:19.316 "data_offset": 2048, 00:18:19.316 "data_size": 63488 00:18:19.316 } 00:18:19.316 ] 00:18:19.316 }' 00:18:19.316 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.316 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:19.882 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:19.882 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:19.882 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:19.882 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:19.883 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:19.883 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.883 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.883 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.883 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:19.883 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.883 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:19.883 "name": "raid_bdev1", 00:18:19.883 "uuid": "fc5b706f-6d2b-471d-aafc-3e7862bcbc52", 00:18:19.883 "strip_size_kb": 0, 00:18:19.883 "state": "online", 00:18:19.883 "raid_level": "raid1", 00:18:19.883 "superblock": true, 00:18:19.883 "num_base_bdevs": 2, 00:18:19.883 "num_base_bdevs_discovered": 2, 00:18:19.883 "num_base_bdevs_operational": 2, 00:18:19.883 "base_bdevs_list": [ 00:18:19.883 { 00:18:19.883 "name": "spare", 00:18:19.883 "uuid": "c23fb3e4-0354-583b-ac47-7117976e7735", 00:18:19.883 "is_configured": true, 00:18:19.883 "data_offset": 2048, 00:18:19.883 "data_size": 63488 00:18:19.883 }, 00:18:19.883 { 00:18:19.883 "name": "BaseBdev2", 00:18:19.883 "uuid": "81dfa88b-cb87-5ce7-bbcc-c20f0ebc0ff5", 00:18:19.883 "is_configured": true, 00:18:19.883 "data_offset": 2048, 00:18:19.883 "data_size": 63488 00:18:19.883 } 00:18:19.883 ] 00:18:19.883 }' 00:18:19.883 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:19.883 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:19.883 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:19.883 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:19.883 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:19.883 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.883 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.883 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:19.883 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.141 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:20.141 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:20.141 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.141 12:17:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:20.141 [2024-11-25 12:17:15.997270] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:20.141 12:17:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.141 12:17:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:20.141 12:17:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:20.141 12:17:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:20.141 12:17:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:20.141 12:17:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:20.141 12:17:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:20.141 12:17:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.141 12:17:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.141 12:17:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.141 12:17:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.141 12:17:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.141 12:17:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.141 12:17:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:20.141 12:17:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.141 12:17:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.141 12:17:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.141 "name": "raid_bdev1", 00:18:20.141 "uuid": "fc5b706f-6d2b-471d-aafc-3e7862bcbc52", 00:18:20.141 "strip_size_kb": 0, 00:18:20.141 "state": "online", 00:18:20.141 "raid_level": "raid1", 00:18:20.141 "superblock": true, 00:18:20.141 "num_base_bdevs": 2, 00:18:20.141 "num_base_bdevs_discovered": 1, 00:18:20.141 "num_base_bdevs_operational": 1, 00:18:20.141 "base_bdevs_list": [ 00:18:20.141 { 00:18:20.141 "name": null, 00:18:20.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.141 "is_configured": false, 00:18:20.141 "data_offset": 0, 00:18:20.141 "data_size": 63488 00:18:20.141 }, 00:18:20.141 { 00:18:20.141 "name": "BaseBdev2", 00:18:20.141 "uuid": "81dfa88b-cb87-5ce7-bbcc-c20f0ebc0ff5", 00:18:20.141 "is_configured": true, 00:18:20.141 "data_offset": 2048, 00:18:20.141 "data_size": 63488 00:18:20.141 } 00:18:20.141 ] 00:18:20.141 }' 00:18:20.141 12:17:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.141 12:17:16 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:18:20.709 12:17:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:20.709 12:17:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.709 12:17:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:20.709 [2024-11-25 12:17:16.513496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:20.709 [2024-11-25 12:17:16.513745] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:20.709 [2024-11-25 12:17:16.513772] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:20.709 [2024-11-25 12:17:16.513820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:20.709 [2024-11-25 12:17:16.529908] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:18:20.709 12:17:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.709 12:17:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:20.709 [2024-11-25 12:17:16.532376] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:21.645 12:17:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:21.645 12:17:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:21.645 12:17:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:21.645 12:17:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:21.645 12:17:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:18:21.645 12:17:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.645 12:17:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.645 12:17:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.645 12:17:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:21.645 12:17:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.645 12:17:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:21.645 "name": "raid_bdev1", 00:18:21.645 "uuid": "fc5b706f-6d2b-471d-aafc-3e7862bcbc52", 00:18:21.645 "strip_size_kb": 0, 00:18:21.645 "state": "online", 00:18:21.645 "raid_level": "raid1", 00:18:21.645 "superblock": true, 00:18:21.645 "num_base_bdevs": 2, 00:18:21.645 "num_base_bdevs_discovered": 2, 00:18:21.645 "num_base_bdevs_operational": 2, 00:18:21.645 "process": { 00:18:21.645 "type": "rebuild", 00:18:21.645 "target": "spare", 00:18:21.645 "progress": { 00:18:21.645 "blocks": 20480, 00:18:21.645 "percent": 32 00:18:21.645 } 00:18:21.645 }, 00:18:21.645 "base_bdevs_list": [ 00:18:21.645 { 00:18:21.645 "name": "spare", 00:18:21.645 "uuid": "c23fb3e4-0354-583b-ac47-7117976e7735", 00:18:21.645 "is_configured": true, 00:18:21.645 "data_offset": 2048, 00:18:21.645 "data_size": 63488 00:18:21.645 }, 00:18:21.645 { 00:18:21.645 "name": "BaseBdev2", 00:18:21.645 "uuid": "81dfa88b-cb87-5ce7-bbcc-c20f0ebc0ff5", 00:18:21.645 "is_configured": true, 00:18:21.645 "data_offset": 2048, 00:18:21.645 "data_size": 63488 00:18:21.645 } 00:18:21.645 ] 00:18:21.645 }' 00:18:21.645 12:17:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:21.645 12:17:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:21.645 
12:17:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:21.645 12:17:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:21.645 12:17:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:21.645 12:17:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.645 12:17:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:21.645 [2024-11-25 12:17:17.698059] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:21.904 [2024-11-25 12:17:17.741453] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:21.904 [2024-11-25 12:17:17.741535] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:21.904 [2024-11-25 12:17:17.741559] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:21.904 [2024-11-25 12:17:17.741574] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:21.904 12:17:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.904 12:17:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:21.904 12:17:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:21.904 12:17:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:21.904 12:17:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:21.904 12:17:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:21.904 12:17:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:18:21.904 12:17:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.904 12:17:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.904 12:17:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.904 12:17:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.904 12:17:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.904 12:17:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.904 12:17:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:21.904 12:17:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.904 12:17:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.904 12:17:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.904 "name": "raid_bdev1", 00:18:21.904 "uuid": "fc5b706f-6d2b-471d-aafc-3e7862bcbc52", 00:18:21.904 "strip_size_kb": 0, 00:18:21.904 "state": "online", 00:18:21.904 "raid_level": "raid1", 00:18:21.904 "superblock": true, 00:18:21.904 "num_base_bdevs": 2, 00:18:21.904 "num_base_bdevs_discovered": 1, 00:18:21.904 "num_base_bdevs_operational": 1, 00:18:21.904 "base_bdevs_list": [ 00:18:21.904 { 00:18:21.904 "name": null, 00:18:21.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.904 "is_configured": false, 00:18:21.904 "data_offset": 0, 00:18:21.904 "data_size": 63488 00:18:21.904 }, 00:18:21.904 { 00:18:21.904 "name": "BaseBdev2", 00:18:21.904 "uuid": "81dfa88b-cb87-5ce7-bbcc-c20f0ebc0ff5", 00:18:21.904 "is_configured": true, 00:18:21.904 "data_offset": 2048, 00:18:21.904 "data_size": 63488 00:18:21.904 } 00:18:21.904 ] 00:18:21.904 }' 00:18:21.904 12:17:17 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.904 12:17:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:22.471 12:17:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:22.471 12:17:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.471 12:17:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:22.471 [2024-11-25 12:17:18.324159] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:22.471 [2024-11-25 12:17:18.324253] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.471 [2024-11-25 12:17:18.324305] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:22.471 [2024-11-25 12:17:18.324358] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.471 [2024-11-25 12:17:18.324979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.471 [2024-11-25 12:17:18.325024] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:22.471 [2024-11-25 12:17:18.325148] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:22.471 [2024-11-25 12:17:18.325176] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:22.471 [2024-11-25 12:17:18.325191] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:22.471 [2024-11-25 12:17:18.325237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:22.471 [2024-11-25 12:17:18.341715] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:18:22.471 spare 00:18:22.471 12:17:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.471 12:17:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:22.471 [2024-11-25 12:17:18.344292] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:23.473 12:17:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:23.473 12:17:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:23.473 12:17:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:23.473 12:17:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:23.473 12:17:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:23.473 12:17:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.473 12:17:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.473 12:17:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.473 12:17:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:23.473 12:17:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.473 12:17:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:23.473 "name": "raid_bdev1", 00:18:23.473 "uuid": "fc5b706f-6d2b-471d-aafc-3e7862bcbc52", 00:18:23.473 "strip_size_kb": 0, 00:18:23.473 
"state": "online", 00:18:23.473 "raid_level": "raid1", 00:18:23.473 "superblock": true, 00:18:23.473 "num_base_bdevs": 2, 00:18:23.473 "num_base_bdevs_discovered": 2, 00:18:23.473 "num_base_bdevs_operational": 2, 00:18:23.473 "process": { 00:18:23.473 "type": "rebuild", 00:18:23.473 "target": "spare", 00:18:23.473 "progress": { 00:18:23.473 "blocks": 20480, 00:18:23.473 "percent": 32 00:18:23.473 } 00:18:23.473 }, 00:18:23.473 "base_bdevs_list": [ 00:18:23.473 { 00:18:23.473 "name": "spare", 00:18:23.473 "uuid": "c23fb3e4-0354-583b-ac47-7117976e7735", 00:18:23.473 "is_configured": true, 00:18:23.473 "data_offset": 2048, 00:18:23.473 "data_size": 63488 00:18:23.473 }, 00:18:23.473 { 00:18:23.473 "name": "BaseBdev2", 00:18:23.473 "uuid": "81dfa88b-cb87-5ce7-bbcc-c20f0ebc0ff5", 00:18:23.473 "is_configured": true, 00:18:23.473 "data_offset": 2048, 00:18:23.473 "data_size": 63488 00:18:23.473 } 00:18:23.473 ] 00:18:23.473 }' 00:18:23.473 12:17:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:23.473 12:17:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:23.473 12:17:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:23.473 12:17:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:23.473 12:17:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:23.473 12:17:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.473 12:17:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:23.473 [2024-11-25 12:17:19.493974] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:23.473 [2024-11-25 12:17:19.553303] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:18:23.473 [2024-11-25 12:17:19.553560] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:23.473 [2024-11-25 12:17:19.553702] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:23.473 [2024-11-25 12:17:19.553754] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:23.730 12:17:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.730 12:17:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:23.730 12:17:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:23.730 12:17:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:23.730 12:17:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:23.730 12:17:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:23.730 12:17:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:23.730 12:17:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.730 12:17:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.730 12:17:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.730 12:17:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.730 12:17:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.730 12:17:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.730 12:17:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.730 12:17:19 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:23.730 12:17:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.730 12:17:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.730 "name": "raid_bdev1", 00:18:23.730 "uuid": "fc5b706f-6d2b-471d-aafc-3e7862bcbc52", 00:18:23.730 "strip_size_kb": 0, 00:18:23.730 "state": "online", 00:18:23.730 "raid_level": "raid1", 00:18:23.730 "superblock": true, 00:18:23.730 "num_base_bdevs": 2, 00:18:23.730 "num_base_bdevs_discovered": 1, 00:18:23.730 "num_base_bdevs_operational": 1, 00:18:23.730 "base_bdevs_list": [ 00:18:23.730 { 00:18:23.730 "name": null, 00:18:23.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.730 "is_configured": false, 00:18:23.730 "data_offset": 0, 00:18:23.730 "data_size": 63488 00:18:23.730 }, 00:18:23.730 { 00:18:23.730 "name": "BaseBdev2", 00:18:23.730 "uuid": "81dfa88b-cb87-5ce7-bbcc-c20f0ebc0ff5", 00:18:23.730 "is_configured": true, 00:18:23.730 "data_offset": 2048, 00:18:23.730 "data_size": 63488 00:18:23.730 } 00:18:23.730 ] 00:18:23.730 }' 00:18:23.730 12:17:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.730 12:17:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:24.297 12:17:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:24.297 12:17:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:24.297 12:17:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:24.297 12:17:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:24.297 12:17:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:24.297 12:17:20 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.297 12:17:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.297 12:17:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.297 12:17:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:24.297 12:17:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.297 12:17:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:24.297 "name": "raid_bdev1", 00:18:24.297 "uuid": "fc5b706f-6d2b-471d-aafc-3e7862bcbc52", 00:18:24.297 "strip_size_kb": 0, 00:18:24.297 "state": "online", 00:18:24.297 "raid_level": "raid1", 00:18:24.297 "superblock": true, 00:18:24.297 "num_base_bdevs": 2, 00:18:24.297 "num_base_bdevs_discovered": 1, 00:18:24.297 "num_base_bdevs_operational": 1, 00:18:24.297 "base_bdevs_list": [ 00:18:24.297 { 00:18:24.297 "name": null, 00:18:24.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.297 "is_configured": false, 00:18:24.297 "data_offset": 0, 00:18:24.297 "data_size": 63488 00:18:24.297 }, 00:18:24.297 { 00:18:24.297 "name": "BaseBdev2", 00:18:24.297 "uuid": "81dfa88b-cb87-5ce7-bbcc-c20f0ebc0ff5", 00:18:24.297 "is_configured": true, 00:18:24.297 "data_offset": 2048, 00:18:24.297 "data_size": 63488 00:18:24.297 } 00:18:24.297 ] 00:18:24.297 }' 00:18:24.297 12:17:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:24.297 12:17:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:24.297 12:17:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:24.298 12:17:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:24.298 12:17:20 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:24.298 12:17:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.298 12:17:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:24.298 12:17:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.298 12:17:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:24.298 12:17:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.298 12:17:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:24.298 [2024-11-25 12:17:20.264504] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:24.298 [2024-11-25 12:17:20.264579] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:24.298 [2024-11-25 12:17:20.264615] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:24.298 [2024-11-25 12:17:20.264631] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:24.298 [2024-11-25 12:17:20.265215] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:24.298 [2024-11-25 12:17:20.265257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:24.298 [2024-11-25 12:17:20.265379] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:24.298 [2024-11-25 12:17:20.265402] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:24.298 [2024-11-25 12:17:20.265416] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:24.298 [2024-11-25 12:17:20.265430] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:24.298 BaseBdev1 00:18:24.298 12:17:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.298 12:17:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:25.233 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:25.233 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:25.233 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:25.233 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:25.233 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:25.233 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:25.233 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.233 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.233 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.233 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.233 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.233 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.233 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.233 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:25.233 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.492 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.492 "name": "raid_bdev1", 00:18:25.492 "uuid": "fc5b706f-6d2b-471d-aafc-3e7862bcbc52", 00:18:25.492 "strip_size_kb": 0, 00:18:25.492 "state": "online", 00:18:25.492 "raid_level": "raid1", 00:18:25.492 "superblock": true, 00:18:25.492 "num_base_bdevs": 2, 00:18:25.492 "num_base_bdevs_discovered": 1, 00:18:25.492 "num_base_bdevs_operational": 1, 00:18:25.492 "base_bdevs_list": [ 00:18:25.492 { 00:18:25.492 "name": null, 00:18:25.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.492 "is_configured": false, 00:18:25.492 "data_offset": 0, 00:18:25.492 "data_size": 63488 00:18:25.492 }, 00:18:25.492 { 00:18:25.492 "name": "BaseBdev2", 00:18:25.492 "uuid": "81dfa88b-cb87-5ce7-bbcc-c20f0ebc0ff5", 00:18:25.492 "is_configured": true, 00:18:25.492 "data_offset": 2048, 00:18:25.492 "data_size": 63488 00:18:25.492 } 00:18:25.492 ] 00:18:25.492 }' 00:18:25.492 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.492 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:25.750 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:25.750 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:25.750 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:25.750 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:25.750 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:25.750 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.750 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:18:25.750 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:25.750 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.750 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.750 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:25.750 "name": "raid_bdev1", 00:18:25.750 "uuid": "fc5b706f-6d2b-471d-aafc-3e7862bcbc52", 00:18:25.750 "strip_size_kb": 0, 00:18:25.750 "state": "online", 00:18:25.750 "raid_level": "raid1", 00:18:25.750 "superblock": true, 00:18:25.750 "num_base_bdevs": 2, 00:18:25.750 "num_base_bdevs_discovered": 1, 00:18:25.750 "num_base_bdevs_operational": 1, 00:18:25.750 "base_bdevs_list": [ 00:18:25.750 { 00:18:25.750 "name": null, 00:18:25.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.750 "is_configured": false, 00:18:25.750 "data_offset": 0, 00:18:25.750 "data_size": 63488 00:18:25.750 }, 00:18:25.750 { 00:18:25.750 "name": "BaseBdev2", 00:18:25.750 "uuid": "81dfa88b-cb87-5ce7-bbcc-c20f0ebc0ff5", 00:18:25.750 "is_configured": true, 00:18:25.750 "data_offset": 2048, 00:18:25.750 "data_size": 63488 00:18:25.750 } 00:18:25.750 ] 00:18:25.750 }' 00:18:25.750 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:26.008 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:26.008 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:26.009 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:26.009 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:26.009 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@652 -- # local es=0 00:18:26.009 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:26.009 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:26.009 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.009 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:26.009 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.009 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:26.009 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.009 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:26.009 [2024-11-25 12:17:21.945301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:26.009 [2024-11-25 12:17:21.945524] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:26.009 [2024-11-25 12:17:21.945551] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:26.009 request: 00:18:26.009 { 00:18:26.009 "base_bdev": "BaseBdev1", 00:18:26.009 "raid_bdev": "raid_bdev1", 00:18:26.009 "method": "bdev_raid_add_base_bdev", 00:18:26.009 "req_id": 1 00:18:26.009 } 00:18:26.009 Got JSON-RPC error response 00:18:26.009 response: 00:18:26.009 { 00:18:26.009 "code": -22, 00:18:26.009 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:26.009 } 00:18:26.009 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:18:26.009 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:18:26.009 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:26.009 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:26.009 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:26.009 12:17:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:26.942 12:17:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:26.942 12:17:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:26.942 12:17:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:26.942 12:17:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:26.942 12:17:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:26.942 12:17:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:26.942 12:17:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.942 12:17:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.942 12:17:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.942 12:17:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.942 12:17:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.942 12:17:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.942 12:17:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:26.942 12:17:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:26.942 12:17:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.942 12:17:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.942 "name": "raid_bdev1", 00:18:26.942 "uuid": "fc5b706f-6d2b-471d-aafc-3e7862bcbc52", 00:18:26.942 "strip_size_kb": 0, 00:18:26.942 "state": "online", 00:18:26.942 "raid_level": "raid1", 00:18:26.942 "superblock": true, 00:18:26.942 "num_base_bdevs": 2, 00:18:26.942 "num_base_bdevs_discovered": 1, 00:18:26.942 "num_base_bdevs_operational": 1, 00:18:26.942 "base_bdevs_list": [ 00:18:26.942 { 00:18:26.942 "name": null, 00:18:26.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.942 "is_configured": false, 00:18:26.942 "data_offset": 0, 00:18:26.942 "data_size": 63488 00:18:26.942 }, 00:18:26.942 { 00:18:26.942 "name": "BaseBdev2", 00:18:26.942 "uuid": "81dfa88b-cb87-5ce7-bbcc-c20f0ebc0ff5", 00:18:26.942 "is_configured": true, 00:18:26.942 "data_offset": 2048, 00:18:26.942 "data_size": 63488 00:18:26.942 } 00:18:26.942 ] 00:18:26.942 }' 00:18:26.942 12:17:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.942 12:17:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:27.509 12:17:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:27.509 12:17:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:27.509 12:17:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:27.509 12:17:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:27.509 12:17:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:27.509 12:17:23 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.509 12:17:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.509 12:17:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:27.509 12:17:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.509 12:17:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.509 12:17:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:27.509 "name": "raid_bdev1", 00:18:27.509 "uuid": "fc5b706f-6d2b-471d-aafc-3e7862bcbc52", 00:18:27.509 "strip_size_kb": 0, 00:18:27.509 "state": "online", 00:18:27.509 "raid_level": "raid1", 00:18:27.509 "superblock": true, 00:18:27.509 "num_base_bdevs": 2, 00:18:27.509 "num_base_bdevs_discovered": 1, 00:18:27.509 "num_base_bdevs_operational": 1, 00:18:27.509 "base_bdevs_list": [ 00:18:27.509 { 00:18:27.509 "name": null, 00:18:27.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.509 "is_configured": false, 00:18:27.509 "data_offset": 0, 00:18:27.509 "data_size": 63488 00:18:27.509 }, 00:18:27.509 { 00:18:27.509 "name": "BaseBdev2", 00:18:27.509 "uuid": "81dfa88b-cb87-5ce7-bbcc-c20f0ebc0ff5", 00:18:27.509 "is_configured": true, 00:18:27.509 "data_offset": 2048, 00:18:27.509 "data_size": 63488 00:18:27.509 } 00:18:27.509 ] 00:18:27.509 }' 00:18:27.509 12:17:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:27.769 12:17:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:27.769 12:17:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:27.769 12:17:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:27.769 12:17:23 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77095 00:18:27.769 12:17:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 77095 ']' 00:18:27.769 12:17:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 77095 00:18:27.769 12:17:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:18:27.769 12:17:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:27.769 12:17:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77095 00:18:27.769 killing process with pid 77095 00:18:27.769 Received shutdown signal, test time was about 18.234433 seconds 00:18:27.769 00:18:27.769 Latency(us) 00:18:27.769 [2024-11-25T12:17:23.860Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.769 [2024-11-25T12:17:23.860Z] =================================================================================================================== 00:18:27.769 [2024-11-25T12:17:23.860Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:27.769 12:17:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:27.769 12:17:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:27.769 12:17:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77095' 00:18:27.769 12:17:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 77095 00:18:27.769 [2024-11-25 12:17:23.716521] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:27.769 12:17:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 77095 00:18:27.769 [2024-11-25 12:17:23.716737] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:27.769 [2024-11-25 12:17:23.716819] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:27.769 [2024-11-25 12:17:23.716840] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:28.027 [2024-11-25 12:17:23.927311] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:28.960 12:17:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:18:28.960 00:18:28.960 real 0m21.671s 00:18:28.960 user 0m29.517s 00:18:28.960 sys 0m2.068s 00:18:28.960 12:17:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:28.960 12:17:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:28.960 ************************************ 00:18:28.961 END TEST raid_rebuild_test_sb_io 00:18:28.961 ************************************ 00:18:29.221 12:17:25 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:18:29.221 12:17:25 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:18:29.221 12:17:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:29.221 12:17:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:29.221 12:17:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:29.221 ************************************ 00:18:29.221 START TEST raid_rebuild_test 00:18:29.221 ************************************ 00:18:29.221 12:17:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:18:29.221 12:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:29.221 12:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:29.221 12:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:18:29.221 12:17:25 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:29.221 12:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:29.221 12:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:29.221 12:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:29.221 12:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:29.221 12:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:29.221 12:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:29.221 12:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:29.221 12:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:29.221 12:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:29.221 12:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:29.221 12:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:29.221 12:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:29.221 12:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:29.221 12:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:29.221 12:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:29.221 12:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:29.221 12:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:29.221 12:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:29.221 12:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:18:29.221 12:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:29.221 12:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:29.221 12:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:29.221 12:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:29.221 12:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:29.221 12:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:18:29.221 12:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77796 00:18:29.221 12:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:29.221 12:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77796 00:18:29.221 12:17:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77796 ']' 00:18:29.221 12:17:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:29.221 12:17:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:29.221 12:17:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:29.221 12:17:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:29.221 12:17:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.221 [2024-11-25 12:17:25.194320] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:18:29.221 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:29.221 Zero copy mechanism will not be used. 00:18:29.221 [2024-11-25 12:17:25.194755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77796 ] 00:18:29.480 [2024-11-25 12:17:25.385516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.480 [2024-11-25 12:17:25.542905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.740 [2024-11-25 12:17:25.764023] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:29.740 [2024-11-25 12:17:25.764104] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:30.004 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:30.004 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:18:30.004 12:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:30.004 12:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:30.004 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.004 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.263 BaseBdev1_malloc 00:18:30.263 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.263 12:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:30.263 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.263 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:18:30.263 [2024-11-25 12:17:26.139530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:30.263 [2024-11-25 12:17:26.139625] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.263 [2024-11-25 12:17:26.139663] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:30.263 [2024-11-25 12:17:26.139684] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.263 [2024-11-25 12:17:26.142697] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.263 [2024-11-25 12:17:26.142900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:30.263 BaseBdev1 00:18:30.263 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.263 12:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:30.263 12:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:30.263 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.263 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.263 BaseBdev2_malloc 00:18:30.263 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.263 12:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:30.263 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.263 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.263 [2024-11-25 12:17:26.194186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:30.263 [2024-11-25 12:17:26.194290] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:18:30.263 [2024-11-25 12:17:26.194356] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:30.263 [2024-11-25 12:17:26.194391] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.263 [2024-11-25 12:17:26.197228] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.263 [2024-11-25 12:17:26.197280] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:30.263 BaseBdev2 00:18:30.263 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.263 12:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:30.263 12:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:30.263 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.263 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.263 BaseBdev3_malloc 00:18:30.263 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.263 12:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:30.263 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.263 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.263 [2024-11-25 12:17:26.260821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:30.263 [2024-11-25 12:17:26.261047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.263 [2024-11-25 12:17:26.261098] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:30.263 [2024-11-25 12:17:26.261119] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.263 [2024-11-25 12:17:26.263995] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.263 [2024-11-25 12:17:26.264049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:30.263 BaseBdev3 00:18:30.263 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.263 12:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:30.263 12:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:30.263 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.263 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.263 BaseBdev4_malloc 00:18:30.263 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.263 12:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:30.263 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.263 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.263 [2024-11-25 12:17:26.314039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:30.263 [2024-11-25 12:17:26.314287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.263 [2024-11-25 12:17:26.314366] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:30.263 [2024-11-25 12:17:26.314399] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.263 [2024-11-25 12:17:26.317174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.263 [2024-11-25 12:17:26.317229] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:30.263 BaseBdev4 00:18:30.264 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.264 12:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:30.264 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.264 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.523 spare_malloc 00:18:30.523 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.523 12:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:30.523 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.523 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.523 spare_delay 00:18:30.523 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.523 12:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:30.523 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.523 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.523 [2024-11-25 12:17:26.371366] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:30.523 [2024-11-25 12:17:26.371443] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.523 [2024-11-25 12:17:26.371475] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:30.523 [2024-11-25 12:17:26.371493] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.523 [2024-11-25 
12:17:26.374420] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.523 [2024-11-25 12:17:26.374472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:30.523 spare 00:18:30.523 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.523 12:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:30.523 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.523 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.523 [2024-11-25 12:17:26.379470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:30.523 [2024-11-25 12:17:26.381898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:30.523 [2024-11-25 12:17:26.381996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:30.523 [2024-11-25 12:17:26.382081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:30.523 [2024-11-25 12:17:26.382223] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:30.523 [2024-11-25 12:17:26.382256] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:30.523 [2024-11-25 12:17:26.382636] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:30.523 [2024-11-25 12:17:26.382877] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:30.523 [2024-11-25 12:17:26.382897] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:30.523 [2024-11-25 12:17:26.383096] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:18:30.523 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.523 12:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:30.523 12:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:30.523 12:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:30.523 12:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:30.523 12:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:30.523 12:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:30.523 12:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.523 12:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.523 12:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.523 12:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.523 12:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.523 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.523 12:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.523 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.523 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.523 12:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.523 "name": "raid_bdev1", 00:18:30.523 "uuid": "06c4dbd4-15b9-4633-9319-03ad49dadb5a", 00:18:30.523 "strip_size_kb": 0, 00:18:30.523 "state": "online", 00:18:30.523 "raid_level": 
"raid1", 00:18:30.523 "superblock": false, 00:18:30.524 "num_base_bdevs": 4, 00:18:30.524 "num_base_bdevs_discovered": 4, 00:18:30.524 "num_base_bdevs_operational": 4, 00:18:30.524 "base_bdevs_list": [ 00:18:30.524 { 00:18:30.524 "name": "BaseBdev1", 00:18:30.524 "uuid": "2ed4b809-d1ec-52b7-b4d7-3d827e256f82", 00:18:30.524 "is_configured": true, 00:18:30.524 "data_offset": 0, 00:18:30.524 "data_size": 65536 00:18:30.524 }, 00:18:30.524 { 00:18:30.524 "name": "BaseBdev2", 00:18:30.524 "uuid": "d0365a19-a850-5d5d-8ed6-3718e56918fa", 00:18:30.524 "is_configured": true, 00:18:30.524 "data_offset": 0, 00:18:30.524 "data_size": 65536 00:18:30.524 }, 00:18:30.524 { 00:18:30.524 "name": "BaseBdev3", 00:18:30.524 "uuid": "7f20ed7f-b396-5f9d-b741-a9e4f2430fa7", 00:18:30.524 "is_configured": true, 00:18:30.524 "data_offset": 0, 00:18:30.524 "data_size": 65536 00:18:30.524 }, 00:18:30.524 { 00:18:30.524 "name": "BaseBdev4", 00:18:30.524 "uuid": "8341748e-ca88-54b9-a9cd-4f4cff10a20e", 00:18:30.524 "is_configured": true, 00:18:30.524 "data_offset": 0, 00:18:30.524 "data_size": 65536 00:18:30.524 } 00:18:30.524 ] 00:18:30.524 }' 00:18:30.524 12:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.524 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.092 12:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:31.092 12:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:31.092 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.092 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.092 [2024-11-25 12:17:26.908097] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:31.092 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.092 12:17:26 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:18:31.092 12:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:31.092 12:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.092 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.092 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.092 12:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.092 12:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:18:31.092 12:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:31.092 12:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:31.092 12:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:31.092 12:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:31.092 12:17:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:31.092 12:17:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:31.092 12:17:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:31.092 12:17:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:31.092 12:17:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:31.092 12:17:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:31.092 12:17:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:31.092 12:17:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:31.092 12:17:27 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:31.351 [2024-11-25 12:17:27.291837] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:31.351 /dev/nbd0 00:18:31.351 12:17:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:31.351 12:17:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:31.351 12:17:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:31.351 12:17:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:31.351 12:17:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:31.351 12:17:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:31.351 12:17:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:31.351 12:17:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:31.351 12:17:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:31.351 12:17:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:31.351 12:17:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:31.351 1+0 records in 00:18:31.351 1+0 records out 00:18:31.351 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000654952 s, 6.3 MB/s 00:18:31.351 12:17:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:31.351 12:17:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:31.351 12:17:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:18:31.351 12:17:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:31.352 12:17:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:31.352 12:17:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:31.352 12:17:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:31.352 12:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:31.352 12:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:31.352 12:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:18:41.342 65536+0 records in 00:18:41.342 65536+0 records out 00:18:41.342 33554432 bytes (34 MB, 32 MiB) copied, 8.6461 s, 3.9 MB/s 00:18:41.342 12:17:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:41.342 12:17:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:41.342 12:17:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:41.342 12:17:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:41.342 12:17:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:41.342 12:17:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:41.342 12:17:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:41.342 12:17:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:41.342 12:17:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:41.342 12:17:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:41.342 [2024-11-25 
12:17:36.281963] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:41.342 12:17:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:41.342 12:17:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:41.342 12:17:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:41.342 12:17:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:41.342 12:17:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:41.342 12:17:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:41.342 12:17:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.342 12:17:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.342 [2024-11-25 12:17:36.290078] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:41.342 12:17:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.342 12:17:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:41.342 12:17:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:41.342 12:17:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:41.342 12:17:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:41.342 12:17:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:41.342 12:17:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:41.342 12:17:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.342 12:17:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.342 12:17:36 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.342 12:17:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.342 12:17:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.342 12:17:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.342 12:17:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.342 12:17:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.342 12:17:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.342 12:17:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.342 "name": "raid_bdev1", 00:18:41.342 "uuid": "06c4dbd4-15b9-4633-9319-03ad49dadb5a", 00:18:41.342 "strip_size_kb": 0, 00:18:41.342 "state": "online", 00:18:41.342 "raid_level": "raid1", 00:18:41.342 "superblock": false, 00:18:41.342 "num_base_bdevs": 4, 00:18:41.342 "num_base_bdevs_discovered": 3, 00:18:41.342 "num_base_bdevs_operational": 3, 00:18:41.342 "base_bdevs_list": [ 00:18:41.342 { 00:18:41.342 "name": null, 00:18:41.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.342 "is_configured": false, 00:18:41.342 "data_offset": 0, 00:18:41.342 "data_size": 65536 00:18:41.342 }, 00:18:41.342 { 00:18:41.342 "name": "BaseBdev2", 00:18:41.342 "uuid": "d0365a19-a850-5d5d-8ed6-3718e56918fa", 00:18:41.342 "is_configured": true, 00:18:41.342 "data_offset": 0, 00:18:41.343 "data_size": 65536 00:18:41.343 }, 00:18:41.343 { 00:18:41.343 "name": "BaseBdev3", 00:18:41.343 "uuid": "7f20ed7f-b396-5f9d-b741-a9e4f2430fa7", 00:18:41.343 "is_configured": true, 00:18:41.343 "data_offset": 0, 00:18:41.343 "data_size": 65536 00:18:41.343 }, 00:18:41.343 { 00:18:41.343 "name": "BaseBdev4", 00:18:41.343 "uuid": "8341748e-ca88-54b9-a9cd-4f4cff10a20e", 00:18:41.343 
"is_configured": true, 00:18:41.343 "data_offset": 0, 00:18:41.343 "data_size": 65536 00:18:41.343 } 00:18:41.343 ] 00:18:41.343 }' 00:18:41.343 12:17:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.343 12:17:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.343 12:17:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:41.343 12:17:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.343 12:17:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.343 [2024-11-25 12:17:36.790246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:41.343 [2024-11-25 12:17:36.804556] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:18:41.343 12:17:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.343 12:17:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:41.343 [2024-11-25 12:17:36.807228] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:41.910 12:17:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:41.910 12:17:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:41.910 12:17:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:41.910 12:17:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:41.910 12:17:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:41.910 12:17:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.910 12:17:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:18:41.910 12:17:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.910 12:17:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.910 12:17:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.910 12:17:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:41.910 "name": "raid_bdev1", 00:18:41.910 "uuid": "06c4dbd4-15b9-4633-9319-03ad49dadb5a", 00:18:41.910 "strip_size_kb": 0, 00:18:41.910 "state": "online", 00:18:41.910 "raid_level": "raid1", 00:18:41.910 "superblock": false, 00:18:41.910 "num_base_bdevs": 4, 00:18:41.910 "num_base_bdevs_discovered": 4, 00:18:41.910 "num_base_bdevs_operational": 4, 00:18:41.910 "process": { 00:18:41.910 "type": "rebuild", 00:18:41.910 "target": "spare", 00:18:41.910 "progress": { 00:18:41.910 "blocks": 20480, 00:18:41.910 "percent": 31 00:18:41.910 } 00:18:41.910 }, 00:18:41.910 "base_bdevs_list": [ 00:18:41.910 { 00:18:41.910 "name": "spare", 00:18:41.910 "uuid": "82d9a3c5-c203-5434-af01-71b5b37fd62a", 00:18:41.910 "is_configured": true, 00:18:41.910 "data_offset": 0, 00:18:41.910 "data_size": 65536 00:18:41.910 }, 00:18:41.910 { 00:18:41.910 "name": "BaseBdev2", 00:18:41.910 "uuid": "d0365a19-a850-5d5d-8ed6-3718e56918fa", 00:18:41.910 "is_configured": true, 00:18:41.910 "data_offset": 0, 00:18:41.910 "data_size": 65536 00:18:41.910 }, 00:18:41.910 { 00:18:41.910 "name": "BaseBdev3", 00:18:41.910 "uuid": "7f20ed7f-b396-5f9d-b741-a9e4f2430fa7", 00:18:41.910 "is_configured": true, 00:18:41.910 "data_offset": 0, 00:18:41.910 "data_size": 65536 00:18:41.910 }, 00:18:41.910 { 00:18:41.910 "name": "BaseBdev4", 00:18:41.910 "uuid": "8341748e-ca88-54b9-a9cd-4f4cff10a20e", 00:18:41.910 "is_configured": true, 00:18:41.910 "data_offset": 0, 00:18:41.910 "data_size": 65536 00:18:41.910 } 00:18:41.910 ] 00:18:41.910 }' 00:18:41.910 12:17:37 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:41.910 12:17:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:41.910 12:17:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:41.910 12:17:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:41.910 12:17:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:41.910 12:17:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.910 12:17:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.910 [2024-11-25 12:17:37.960264] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:42.169 [2024-11-25 12:17:38.016163] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:42.169 [2024-11-25 12:17:38.016301] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:42.169 [2024-11-25 12:17:38.016332] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:42.169 [2024-11-25 12:17:38.016374] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:42.169 12:17:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.169 12:17:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:42.169 12:17:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:42.169 12:17:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:42.169 12:17:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:42.169 12:17:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:18:42.169 12:17:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:42.169 12:17:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.169 12:17:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.169 12:17:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.169 12:17:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.169 12:17:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.169 12:17:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.169 12:17:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.169 12:17:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.169 12:17:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.169 12:17:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.169 "name": "raid_bdev1", 00:18:42.169 "uuid": "06c4dbd4-15b9-4633-9319-03ad49dadb5a", 00:18:42.169 "strip_size_kb": 0, 00:18:42.169 "state": "online", 00:18:42.169 "raid_level": "raid1", 00:18:42.169 "superblock": false, 00:18:42.169 "num_base_bdevs": 4, 00:18:42.169 "num_base_bdevs_discovered": 3, 00:18:42.169 "num_base_bdevs_operational": 3, 00:18:42.169 "base_bdevs_list": [ 00:18:42.169 { 00:18:42.169 "name": null, 00:18:42.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.169 "is_configured": false, 00:18:42.169 "data_offset": 0, 00:18:42.169 "data_size": 65536 00:18:42.169 }, 00:18:42.169 { 00:18:42.169 "name": "BaseBdev2", 00:18:42.169 "uuid": "d0365a19-a850-5d5d-8ed6-3718e56918fa", 00:18:42.169 "is_configured": true, 00:18:42.169 "data_offset": 0, 00:18:42.169 "data_size": 65536 00:18:42.169 }, 00:18:42.169 { 
00:18:42.169 "name": "BaseBdev3", 00:18:42.169 "uuid": "7f20ed7f-b396-5f9d-b741-a9e4f2430fa7", 00:18:42.169 "is_configured": true, 00:18:42.169 "data_offset": 0, 00:18:42.169 "data_size": 65536 00:18:42.169 }, 00:18:42.169 { 00:18:42.169 "name": "BaseBdev4", 00:18:42.169 "uuid": "8341748e-ca88-54b9-a9cd-4f4cff10a20e", 00:18:42.169 "is_configured": true, 00:18:42.169 "data_offset": 0, 00:18:42.169 "data_size": 65536 00:18:42.169 } 00:18:42.169 ] 00:18:42.169 }' 00:18:42.169 12:17:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.169 12:17:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.737 12:17:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:42.737 12:17:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:42.737 12:17:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:42.737 12:17:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:42.737 12:17:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:42.737 12:17:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.737 12:17:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.737 12:17:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.737 12:17:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.737 12:17:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.737 12:17:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:42.737 "name": "raid_bdev1", 00:18:42.737 "uuid": "06c4dbd4-15b9-4633-9319-03ad49dadb5a", 00:18:42.737 "strip_size_kb": 0, 00:18:42.737 "state": "online", 
00:18:42.737 "raid_level": "raid1", 00:18:42.737 "superblock": false, 00:18:42.737 "num_base_bdevs": 4, 00:18:42.737 "num_base_bdevs_discovered": 3, 00:18:42.737 "num_base_bdevs_operational": 3, 00:18:42.737 "base_bdevs_list": [ 00:18:42.737 { 00:18:42.737 "name": null, 00:18:42.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.737 "is_configured": false, 00:18:42.737 "data_offset": 0, 00:18:42.737 "data_size": 65536 00:18:42.737 }, 00:18:42.737 { 00:18:42.737 "name": "BaseBdev2", 00:18:42.737 "uuid": "d0365a19-a850-5d5d-8ed6-3718e56918fa", 00:18:42.737 "is_configured": true, 00:18:42.737 "data_offset": 0, 00:18:42.737 "data_size": 65536 00:18:42.737 }, 00:18:42.737 { 00:18:42.737 "name": "BaseBdev3", 00:18:42.737 "uuid": "7f20ed7f-b396-5f9d-b741-a9e4f2430fa7", 00:18:42.737 "is_configured": true, 00:18:42.737 "data_offset": 0, 00:18:42.737 "data_size": 65536 00:18:42.737 }, 00:18:42.737 { 00:18:42.737 "name": "BaseBdev4", 00:18:42.737 "uuid": "8341748e-ca88-54b9-a9cd-4f4cff10a20e", 00:18:42.737 "is_configured": true, 00:18:42.737 "data_offset": 0, 00:18:42.737 "data_size": 65536 00:18:42.737 } 00:18:42.737 ] 00:18:42.737 }' 00:18:42.737 12:17:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:42.737 12:17:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:42.737 12:17:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:42.737 12:17:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:42.737 12:17:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:42.737 12:17:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.737 12:17:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.737 [2024-11-25 12:17:38.716843] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:42.737 [2024-11-25 12:17:38.730857] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:18:42.737 12:17:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.737 12:17:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:42.737 [2024-11-25 12:17:38.733564] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:43.675 12:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:43.675 12:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:43.675 12:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:43.675 12:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:43.675 12:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:43.675 12:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.675 12:17:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.675 12:17:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.675 12:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.675 12:17:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.953 12:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:43.953 "name": "raid_bdev1", 00:18:43.953 "uuid": "06c4dbd4-15b9-4633-9319-03ad49dadb5a", 00:18:43.953 "strip_size_kb": 0, 00:18:43.953 "state": "online", 00:18:43.953 "raid_level": "raid1", 00:18:43.953 "superblock": false, 00:18:43.953 "num_base_bdevs": 4, 00:18:43.953 
"num_base_bdevs_discovered": 4, 00:18:43.953 "num_base_bdevs_operational": 4, 00:18:43.953 "process": { 00:18:43.953 "type": "rebuild", 00:18:43.953 "target": "spare", 00:18:43.953 "progress": { 00:18:43.953 "blocks": 20480, 00:18:43.953 "percent": 31 00:18:43.953 } 00:18:43.953 }, 00:18:43.953 "base_bdevs_list": [ 00:18:43.953 { 00:18:43.953 "name": "spare", 00:18:43.953 "uuid": "82d9a3c5-c203-5434-af01-71b5b37fd62a", 00:18:43.953 "is_configured": true, 00:18:43.953 "data_offset": 0, 00:18:43.953 "data_size": 65536 00:18:43.953 }, 00:18:43.953 { 00:18:43.953 "name": "BaseBdev2", 00:18:43.953 "uuid": "d0365a19-a850-5d5d-8ed6-3718e56918fa", 00:18:43.953 "is_configured": true, 00:18:43.953 "data_offset": 0, 00:18:43.953 "data_size": 65536 00:18:43.953 }, 00:18:43.953 { 00:18:43.953 "name": "BaseBdev3", 00:18:43.953 "uuid": "7f20ed7f-b396-5f9d-b741-a9e4f2430fa7", 00:18:43.953 "is_configured": true, 00:18:43.953 "data_offset": 0, 00:18:43.953 "data_size": 65536 00:18:43.953 }, 00:18:43.953 { 00:18:43.953 "name": "BaseBdev4", 00:18:43.953 "uuid": "8341748e-ca88-54b9-a9cd-4f4cff10a20e", 00:18:43.953 "is_configured": true, 00:18:43.953 "data_offset": 0, 00:18:43.953 "data_size": 65536 00:18:43.953 } 00:18:43.953 ] 00:18:43.953 }' 00:18:43.953 12:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:43.953 12:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:43.953 12:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:43.953 12:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:43.953 12:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:18:43.953 12:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:43.953 12:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = 
raid1 ']' 00:18:43.953 12:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:18:43.953 12:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:43.953 12:17:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.953 12:17:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.953 [2024-11-25 12:17:39.918983] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:43.953 [2024-11-25 12:17:39.942950] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:18:43.953 12:17:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.953 12:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:18:43.953 12:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:18:43.953 12:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:43.953 12:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:43.953 12:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:43.953 12:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:43.953 12:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:43.953 12:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.953 12:17:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.953 12:17:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.953 12:17:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.953 12:17:39 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.953 12:17:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:43.953 "name": "raid_bdev1", 00:18:43.953 "uuid": "06c4dbd4-15b9-4633-9319-03ad49dadb5a", 00:18:43.953 "strip_size_kb": 0, 00:18:43.953 "state": "online", 00:18:43.953 "raid_level": "raid1", 00:18:43.953 "superblock": false, 00:18:43.953 "num_base_bdevs": 4, 00:18:43.953 "num_base_bdevs_discovered": 3, 00:18:43.953 "num_base_bdevs_operational": 3, 00:18:43.953 "process": { 00:18:43.953 "type": "rebuild", 00:18:43.953 "target": "spare", 00:18:43.953 "progress": { 00:18:43.953 "blocks": 24576, 00:18:43.953 "percent": 37 00:18:43.953 } 00:18:43.953 }, 00:18:43.953 "base_bdevs_list": [ 00:18:43.953 { 00:18:43.953 "name": "spare", 00:18:43.953 "uuid": "82d9a3c5-c203-5434-af01-71b5b37fd62a", 00:18:43.953 "is_configured": true, 00:18:43.953 "data_offset": 0, 00:18:43.953 "data_size": 65536 00:18:43.953 }, 00:18:43.953 { 00:18:43.953 "name": null, 00:18:43.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.954 "is_configured": false, 00:18:43.954 "data_offset": 0, 00:18:43.954 "data_size": 65536 00:18:43.954 }, 00:18:43.954 { 00:18:43.954 "name": "BaseBdev3", 00:18:43.954 "uuid": "7f20ed7f-b396-5f9d-b741-a9e4f2430fa7", 00:18:43.954 "is_configured": true, 00:18:43.954 "data_offset": 0, 00:18:43.954 "data_size": 65536 00:18:43.954 }, 00:18:43.954 { 00:18:43.954 "name": "BaseBdev4", 00:18:43.954 "uuid": "8341748e-ca88-54b9-a9cd-4f4cff10a20e", 00:18:43.954 "is_configured": true, 00:18:43.954 "data_offset": 0, 00:18:43.954 "data_size": 65536 00:18:43.954 } 00:18:43.954 ] 00:18:43.954 }' 00:18:43.954 12:17:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:44.212 12:17:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:44.212 12:17:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:18:44.212 12:17:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:44.212 12:17:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=479 00:18:44.212 12:17:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:44.212 12:17:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:44.212 12:17:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:44.212 12:17:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:44.212 12:17:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:44.212 12:17:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:44.212 12:17:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.212 12:17:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.212 12:17:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.212 12:17:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.212 12:17:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.212 12:17:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:44.212 "name": "raid_bdev1", 00:18:44.212 "uuid": "06c4dbd4-15b9-4633-9319-03ad49dadb5a", 00:18:44.212 "strip_size_kb": 0, 00:18:44.212 "state": "online", 00:18:44.212 "raid_level": "raid1", 00:18:44.212 "superblock": false, 00:18:44.212 "num_base_bdevs": 4, 00:18:44.212 "num_base_bdevs_discovered": 3, 00:18:44.212 "num_base_bdevs_operational": 3, 00:18:44.212 "process": { 00:18:44.212 "type": "rebuild", 00:18:44.212 "target": "spare", 00:18:44.212 "progress": { 
00:18:44.212 "blocks": 26624, 00:18:44.212 "percent": 40 00:18:44.212 } 00:18:44.212 }, 00:18:44.212 "base_bdevs_list": [ 00:18:44.212 { 00:18:44.212 "name": "spare", 00:18:44.212 "uuid": "82d9a3c5-c203-5434-af01-71b5b37fd62a", 00:18:44.212 "is_configured": true, 00:18:44.212 "data_offset": 0, 00:18:44.212 "data_size": 65536 00:18:44.212 }, 00:18:44.212 { 00:18:44.212 "name": null, 00:18:44.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.212 "is_configured": false, 00:18:44.212 "data_offset": 0, 00:18:44.212 "data_size": 65536 00:18:44.212 }, 00:18:44.212 { 00:18:44.212 "name": "BaseBdev3", 00:18:44.212 "uuid": "7f20ed7f-b396-5f9d-b741-a9e4f2430fa7", 00:18:44.212 "is_configured": true, 00:18:44.212 "data_offset": 0, 00:18:44.212 "data_size": 65536 00:18:44.212 }, 00:18:44.212 { 00:18:44.212 "name": "BaseBdev4", 00:18:44.212 "uuid": "8341748e-ca88-54b9-a9cd-4f4cff10a20e", 00:18:44.212 "is_configured": true, 00:18:44.212 "data_offset": 0, 00:18:44.212 "data_size": 65536 00:18:44.212 } 00:18:44.212 ] 00:18:44.212 }' 00:18:44.212 12:17:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:44.212 12:17:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:44.212 12:17:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:44.212 12:17:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:44.212 12:17:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:45.588 12:17:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:45.588 12:17:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:45.588 12:17:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:45.588 12:17:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:18:45.588 12:17:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:45.588 12:17:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:45.588 12:17:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.588 12:17:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.588 12:17:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.588 12:17:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.588 12:17:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.588 12:17:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:45.588 "name": "raid_bdev1", 00:18:45.588 "uuid": "06c4dbd4-15b9-4633-9319-03ad49dadb5a", 00:18:45.588 "strip_size_kb": 0, 00:18:45.588 "state": "online", 00:18:45.588 "raid_level": "raid1", 00:18:45.588 "superblock": false, 00:18:45.588 "num_base_bdevs": 4, 00:18:45.588 "num_base_bdevs_discovered": 3, 00:18:45.588 "num_base_bdevs_operational": 3, 00:18:45.588 "process": { 00:18:45.588 "type": "rebuild", 00:18:45.588 "target": "spare", 00:18:45.588 "progress": { 00:18:45.588 "blocks": 51200, 00:18:45.588 "percent": 78 00:18:45.588 } 00:18:45.588 }, 00:18:45.588 "base_bdevs_list": [ 00:18:45.588 { 00:18:45.588 "name": "spare", 00:18:45.588 "uuid": "82d9a3c5-c203-5434-af01-71b5b37fd62a", 00:18:45.588 "is_configured": true, 00:18:45.588 "data_offset": 0, 00:18:45.588 "data_size": 65536 00:18:45.588 }, 00:18:45.588 { 00:18:45.588 "name": null, 00:18:45.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.588 "is_configured": false, 00:18:45.588 "data_offset": 0, 00:18:45.588 "data_size": 65536 00:18:45.588 }, 00:18:45.588 { 00:18:45.588 "name": "BaseBdev3", 00:18:45.588 "uuid": 
"7f20ed7f-b396-5f9d-b741-a9e4f2430fa7", 00:18:45.588 "is_configured": true, 00:18:45.588 "data_offset": 0, 00:18:45.588 "data_size": 65536 00:18:45.588 }, 00:18:45.588 { 00:18:45.588 "name": "BaseBdev4", 00:18:45.589 "uuid": "8341748e-ca88-54b9-a9cd-4f4cff10a20e", 00:18:45.589 "is_configured": true, 00:18:45.589 "data_offset": 0, 00:18:45.589 "data_size": 65536 00:18:45.589 } 00:18:45.589 ] 00:18:45.589 }' 00:18:45.589 12:17:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:45.589 12:17:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:45.589 12:17:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:45.589 12:17:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:45.589 12:17:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:46.154 [2024-11-25 12:17:41.957960] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:46.154 [2024-11-25 12:17:41.958075] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:46.154 [2024-11-25 12:17:41.958142] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:46.412 12:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:46.412 12:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:46.412 12:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:46.412 12:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:46.412 12:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:46.412 12:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:46.412 12:17:42 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.413 12:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.413 12:17:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.413 12:17:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.413 12:17:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.671 12:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:46.671 "name": "raid_bdev1", 00:18:46.671 "uuid": "06c4dbd4-15b9-4633-9319-03ad49dadb5a", 00:18:46.671 "strip_size_kb": 0, 00:18:46.671 "state": "online", 00:18:46.671 "raid_level": "raid1", 00:18:46.671 "superblock": false, 00:18:46.671 "num_base_bdevs": 4, 00:18:46.671 "num_base_bdevs_discovered": 3, 00:18:46.671 "num_base_bdevs_operational": 3, 00:18:46.671 "base_bdevs_list": [ 00:18:46.671 { 00:18:46.671 "name": "spare", 00:18:46.671 "uuid": "82d9a3c5-c203-5434-af01-71b5b37fd62a", 00:18:46.671 "is_configured": true, 00:18:46.671 "data_offset": 0, 00:18:46.671 "data_size": 65536 00:18:46.671 }, 00:18:46.671 { 00:18:46.671 "name": null, 00:18:46.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.671 "is_configured": false, 00:18:46.671 "data_offset": 0, 00:18:46.671 "data_size": 65536 00:18:46.671 }, 00:18:46.671 { 00:18:46.671 "name": "BaseBdev3", 00:18:46.671 "uuid": "7f20ed7f-b396-5f9d-b741-a9e4f2430fa7", 00:18:46.671 "is_configured": true, 00:18:46.671 "data_offset": 0, 00:18:46.671 "data_size": 65536 00:18:46.671 }, 00:18:46.671 { 00:18:46.671 "name": "BaseBdev4", 00:18:46.671 "uuid": "8341748e-ca88-54b9-a9cd-4f4cff10a20e", 00:18:46.671 "is_configured": true, 00:18:46.671 "data_offset": 0, 00:18:46.671 "data_size": 65536 00:18:46.671 } 00:18:46.671 ] 00:18:46.671 }' 00:18:46.671 12:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:18:46.671 12:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:46.671 12:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:46.671 12:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:46.671 12:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:18:46.671 12:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:46.671 12:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:46.671 12:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:46.671 12:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:46.671 12:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:46.671 12:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.671 12:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.671 12:17:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.671 12:17:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.671 12:17:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.671 12:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:46.671 "name": "raid_bdev1", 00:18:46.671 "uuid": "06c4dbd4-15b9-4633-9319-03ad49dadb5a", 00:18:46.671 "strip_size_kb": 0, 00:18:46.671 "state": "online", 00:18:46.671 "raid_level": "raid1", 00:18:46.671 "superblock": false, 00:18:46.671 "num_base_bdevs": 4, 00:18:46.671 "num_base_bdevs_discovered": 3, 00:18:46.671 "num_base_bdevs_operational": 3, 00:18:46.671 
"base_bdevs_list": [ 00:18:46.671 { 00:18:46.671 "name": "spare", 00:18:46.671 "uuid": "82d9a3c5-c203-5434-af01-71b5b37fd62a", 00:18:46.671 "is_configured": true, 00:18:46.671 "data_offset": 0, 00:18:46.671 "data_size": 65536 00:18:46.671 }, 00:18:46.671 { 00:18:46.671 "name": null, 00:18:46.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.671 "is_configured": false, 00:18:46.671 "data_offset": 0, 00:18:46.671 "data_size": 65536 00:18:46.671 }, 00:18:46.671 { 00:18:46.671 "name": "BaseBdev3", 00:18:46.671 "uuid": "7f20ed7f-b396-5f9d-b741-a9e4f2430fa7", 00:18:46.671 "is_configured": true, 00:18:46.671 "data_offset": 0, 00:18:46.671 "data_size": 65536 00:18:46.671 }, 00:18:46.671 { 00:18:46.671 "name": "BaseBdev4", 00:18:46.671 "uuid": "8341748e-ca88-54b9-a9cd-4f4cff10a20e", 00:18:46.671 "is_configured": true, 00:18:46.671 "data_offset": 0, 00:18:46.671 "data_size": 65536 00:18:46.671 } 00:18:46.671 ] 00:18:46.671 }' 00:18:46.671 12:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:46.671 12:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:46.671 12:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:46.671 12:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:46.671 12:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:46.671 12:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:46.671 12:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:46.671 12:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:46.671 12:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:46.671 12:17:42 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:46.671 12:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.671 12:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.671 12:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.671 12:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.930 12:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.930 12:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.930 12:17:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.930 12:17:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.930 12:17:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.930 12:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.930 "name": "raid_bdev1", 00:18:46.930 "uuid": "06c4dbd4-15b9-4633-9319-03ad49dadb5a", 00:18:46.930 "strip_size_kb": 0, 00:18:46.930 "state": "online", 00:18:46.930 "raid_level": "raid1", 00:18:46.930 "superblock": false, 00:18:46.930 "num_base_bdevs": 4, 00:18:46.930 "num_base_bdevs_discovered": 3, 00:18:46.930 "num_base_bdevs_operational": 3, 00:18:46.930 "base_bdevs_list": [ 00:18:46.930 { 00:18:46.930 "name": "spare", 00:18:46.930 "uuid": "82d9a3c5-c203-5434-af01-71b5b37fd62a", 00:18:46.930 "is_configured": true, 00:18:46.930 "data_offset": 0, 00:18:46.930 "data_size": 65536 00:18:46.930 }, 00:18:46.930 { 00:18:46.930 "name": null, 00:18:46.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.930 "is_configured": false, 00:18:46.930 "data_offset": 0, 00:18:46.930 "data_size": 65536 00:18:46.930 }, 00:18:46.930 { 00:18:46.930 "name": "BaseBdev3", 00:18:46.930 "uuid": 
"7f20ed7f-b396-5f9d-b741-a9e4f2430fa7", 00:18:46.930 "is_configured": true, 00:18:46.930 "data_offset": 0, 00:18:46.930 "data_size": 65536 00:18:46.930 }, 00:18:46.930 { 00:18:46.930 "name": "BaseBdev4", 00:18:46.930 "uuid": "8341748e-ca88-54b9-a9cd-4f4cff10a20e", 00:18:46.930 "is_configured": true, 00:18:46.930 "data_offset": 0, 00:18:46.930 "data_size": 65536 00:18:46.930 } 00:18:46.930 ] 00:18:46.930 }' 00:18:46.930 12:17:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.930 12:17:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.497 12:17:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:47.497 12:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.497 12:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.497 [2024-11-25 12:17:43.334703] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:47.497 [2024-11-25 12:17:43.335092] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:47.497 [2024-11-25 12:17:43.335394] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:47.497 [2024-11-25 12:17:43.335685] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:47.497 [2024-11-25 12:17:43.335718] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:47.497 12:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.497 12:17:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.497 12:17:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:18:47.497 12:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:47.497 12:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.497 12:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.497 12:17:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:47.497 12:17:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:47.497 12:17:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:47.497 12:17:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:47.497 12:17:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:47.497 12:17:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:47.497 12:17:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:47.497 12:17:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:47.497 12:17:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:47.497 12:17:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:47.497 12:17:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:47.497 12:17:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:47.497 12:17:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:47.755 /dev/nbd0 00:18:47.755 12:17:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:47.755 12:17:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:47.755 12:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:47.755 12:17:43 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:47.755 12:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:47.755 12:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:47.755 12:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:47.755 12:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:47.755 12:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:47.755 12:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:47.755 12:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:47.755 1+0 records in 00:18:47.755 1+0 records out 00:18:47.755 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000598971 s, 6.8 MB/s 00:18:47.755 12:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:47.755 12:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:47.755 12:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:47.755 12:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:47.755 12:17:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:47.755 12:17:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:47.755 12:17:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:47.755 12:17:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:48.013 /dev/nbd1 00:18:48.013 
12:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:48.013 12:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:48.013 12:17:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:48.013 12:17:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:48.013 12:17:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:48.013 12:17:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:48.013 12:17:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:48.013 12:17:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:48.013 12:17:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:48.013 12:17:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:48.013 12:17:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:48.013 1+0 records in 00:18:48.013 1+0 records out 00:18:48.013 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000454753 s, 9.0 MB/s 00:18:48.013 12:17:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:48.013 12:17:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:48.013 12:17:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:48.013 12:17:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:48.013 12:17:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:48.013 12:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:18:48.013 12:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:48.013 12:17:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:48.271 12:17:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:48.272 12:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:48.272 12:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:48.272 12:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:48.272 12:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:48.272 12:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:48.272 12:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:48.530 12:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:48.530 12:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:48.530 12:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:48.530 12:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:48.530 12:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:48.530 12:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:48.530 12:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:48.530 12:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:48.530 12:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:48.530 12:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:48.788 12:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:48.788 12:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:48.788 12:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:48.788 12:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:48.788 12:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:48.788 12:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:48.788 12:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:48.788 12:17:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:48.788 12:17:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:18:48.788 12:17:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77796 00:18:48.788 12:17:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77796 ']' 00:18:48.788 12:17:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77796 00:18:48.788 12:17:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:18:48.788 12:17:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:48.788 12:17:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77796 00:18:48.788 12:17:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:48.788 12:17:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:48.788 killing process with pid 77796 00:18:48.788 12:17:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77796' 00:18:48.788 
12:17:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77796 00:18:48.788 Received shutdown signal, test time was about 60.000000 seconds 00:18:48.788 00:18:48.788 Latency(us) 00:18:48.788 [2024-11-25T12:17:44.879Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:48.788 [2024-11-25T12:17:44.879Z] =================================================================================================================== 00:18:48.788 [2024-11-25T12:17:44.879Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:48.788 12:17:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77796 00:18:48.788 [2024-11-25 12:17:44.841672] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:49.354 [2024-11-25 12:17:45.295043] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:50.287 12:17:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:18:50.288 00:18:50.288 real 0m21.266s 00:18:50.288 user 0m23.968s 00:18:50.288 sys 0m3.818s 00:18:50.288 12:17:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:50.288 ************************************ 00:18:50.288 12:17:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.288 END TEST raid_rebuild_test 00:18:50.288 ************************************ 00:18:50.546 12:17:46 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:18:50.546 12:17:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:50.546 12:17:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:50.546 12:17:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:50.546 ************************************ 00:18:50.546 START TEST raid_rebuild_test_sb 00:18:50.546 ************************************ 00:18:50.546 12:17:46 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:18:50.546 12:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:50.546 12:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:50.546 12:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:50.546 12:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:50.546 12:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:50.546 12:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:50.546 12:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:50.546 12:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:50.546 12:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:50.546 12:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:50.546 12:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:50.546 12:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:50.546 12:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:50.546 12:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:50.546 12:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:50.546 12:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:50.546 12:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:50.546 12:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:50.546 12:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:18:50.546 12:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:50.546 12:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:50.546 12:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:50.546 12:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:50.546 12:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:50.546 12:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:50.546 12:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:50.546 12:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:50.546 12:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:50.546 12:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:50.546 12:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:50.546 12:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78277 00:18:50.546 12:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78277 00:18:50.546 12:17:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78277 ']' 00:18:50.546 12:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:50.546 12:17:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.546 12:17:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:50.546 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:18:50.546 12:17:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.546 12:17:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:50.546 12:17:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.546 [2024-11-25 12:17:46.507749] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:18:50.546 [2024-11-25 12:17:46.508592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78277 ] 00:18:50.546 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:50.546 Zero copy mechanism will not be used. 00:18:50.805 [2024-11-25 12:17:46.689725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.805 [2024-11-25 12:17:46.834241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.062 [2024-11-25 12:17:47.064982] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:51.062 [2024-11-25 12:17:47.065051] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:51.630 12:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:51.630 12:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:18:51.630 12:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:51.630 12:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:51.630 12:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:51.630 12:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.630 BaseBdev1_malloc 00:18:51.630 12:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.630 12:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:51.630 12:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.630 12:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.630 [2024-11-25 12:17:47.612455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:51.630 [2024-11-25 12:17:47.612566] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:51.630 [2024-11-25 12:17:47.612601] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:51.630 [2024-11-25 12:17:47.612619] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:51.630 [2024-11-25 12:17:47.615578] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:51.630 [2024-11-25 12:17:47.615622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:51.630 BaseBdev1 00:18:51.630 12:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.630 12:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:51.630 12:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:51.630 12:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.630 12:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.630 BaseBdev2_malloc 00:18:51.630 12:17:47 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.630 12:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:51.630 12:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.630 12:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.630 [2024-11-25 12:17:47.662288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:51.630 [2024-11-25 12:17:47.662387] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:51.630 [2024-11-25 12:17:47.662418] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:51.630 [2024-11-25 12:17:47.662440] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:51.630 [2024-11-25 12:17:47.665302] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:51.630 [2024-11-25 12:17:47.665403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:51.630 BaseBdev2 00:18:51.630 12:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.630 12:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:51.630 12:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:51.630 12:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.630 12:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.889 BaseBdev3_malloc 00:18:51.889 12:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.889 12:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 
00:18:51.889 12:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.889 12:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.889 [2024-11-25 12:17:47.732327] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:51.889 [2024-11-25 12:17:47.732434] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:51.889 [2024-11-25 12:17:47.732469] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:51.889 [2024-11-25 12:17:47.732488] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:51.889 [2024-11-25 12:17:47.735286] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:51.889 [2024-11-25 12:17:47.735331] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:51.889 BaseBdev3 00:18:51.889 12:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.889 12:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:51.889 12:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:51.889 12:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.889 12:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.889 BaseBdev4_malloc 00:18:51.889 12:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.889 12:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:51.889 12:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.889 12:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:18:51.889 [2024-11-25 12:17:47.794612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:51.889 [2024-11-25 12:17:47.794727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:51.889 [2024-11-25 12:17:47.794755] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:51.889 [2024-11-25 12:17:47.794772] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:51.889 [2024-11-25 12:17:47.798013] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:51.889 [2024-11-25 12:17:47.798066] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:51.889 BaseBdev4 00:18:51.889 12:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.889 12:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:51.889 12:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.889 12:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.889 spare_malloc 00:18:51.889 12:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.889 12:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:51.889 12:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.889 12:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.889 spare_delay 00:18:51.889 12:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.889 12:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:51.889 12:17:47 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.889 12:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.889 [2024-11-25 12:17:47.865618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:51.889 [2024-11-25 12:17:47.865712] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:51.889 [2024-11-25 12:17:47.865771] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:51.889 [2024-11-25 12:17:47.865789] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:51.889 [2024-11-25 12:17:47.868732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:51.889 [2024-11-25 12:17:47.868778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:51.889 spare 00:18:51.889 12:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.889 12:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:51.889 12:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.889 12:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.889 [2024-11-25 12:17:47.873771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:51.889 [2024-11-25 12:17:47.876564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:51.889 [2024-11-25 12:17:47.876666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:51.889 [2024-11-25 12:17:47.876793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:51.889 [2024-11-25 12:17:47.877052] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:51.889 [2024-11-25 12:17:47.877088] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:51.889 [2024-11-25 12:17:47.877456] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:51.889 [2024-11-25 12:17:47.877751] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:51.889 [2024-11-25 12:17:47.877776] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:51.889 [2024-11-25 12:17:47.878019] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:51.889 12:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.889 12:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:51.889 12:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:51.890 12:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:51.890 12:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:51.890 12:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:51.890 12:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:51.890 12:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:51.890 12:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:51.890 12:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:51.890 12:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:51.890 12:17:47 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.890 12:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.890 12:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.890 12:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.890 12:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.890 12:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:51.890 "name": "raid_bdev1", 00:18:51.890 "uuid": "56433b9f-224f-4e8c-875e-176aa7ea907e", 00:18:51.890 "strip_size_kb": 0, 00:18:51.890 "state": "online", 00:18:51.890 "raid_level": "raid1", 00:18:51.890 "superblock": true, 00:18:51.890 "num_base_bdevs": 4, 00:18:51.890 "num_base_bdevs_discovered": 4, 00:18:51.890 "num_base_bdevs_operational": 4, 00:18:51.890 "base_bdevs_list": [ 00:18:51.890 { 00:18:51.890 "name": "BaseBdev1", 00:18:51.890 "uuid": "eb3b5b5f-b98e-5c46-92a3-5d613fee8157", 00:18:51.890 "is_configured": true, 00:18:51.890 "data_offset": 2048, 00:18:51.890 "data_size": 63488 00:18:51.890 }, 00:18:51.890 { 00:18:51.890 "name": "BaseBdev2", 00:18:51.890 "uuid": "07901b70-9227-56b1-9e79-545acbcc35f7", 00:18:51.890 "is_configured": true, 00:18:51.890 "data_offset": 2048, 00:18:51.890 "data_size": 63488 00:18:51.890 }, 00:18:51.890 { 00:18:51.890 "name": "BaseBdev3", 00:18:51.890 "uuid": "d0ef5c48-b1dc-50b6-99ac-b9bdfcf84274", 00:18:51.890 "is_configured": true, 00:18:51.890 "data_offset": 2048, 00:18:51.890 "data_size": 63488 00:18:51.890 }, 00:18:51.890 { 00:18:51.890 "name": "BaseBdev4", 00:18:51.890 "uuid": "d624b5da-19ed-561e-b904-d72fd10442fd", 00:18:51.890 "is_configured": true, 00:18:51.890 "data_offset": 2048, 00:18:51.890 "data_size": 63488 00:18:51.890 } 00:18:51.890 ] 00:18:51.890 }' 00:18:51.890 12:17:47 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:51.890 12:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.456 12:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:52.456 12:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:52.456 12:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.456 12:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.456 [2024-11-25 12:17:48.374721] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:52.456 12:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.456 12:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:18:52.456 12:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:52.456 12:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.456 12:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.456 12:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.456 12:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.456 12:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:18:52.456 12:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:52.456 12:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:52.456 12:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:52.456 12:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 
00:18:52.456 12:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:52.456 12:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:52.456 12:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:52.456 12:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:52.456 12:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:52.456 12:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:52.456 12:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:52.456 12:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:52.456 12:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:52.715 [2024-11-25 12:17:48.706391] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:52.715 /dev/nbd0 00:18:52.715 12:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:52.715 12:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:52.715 12:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:52.715 12:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:52.715 12:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:52.715 12:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:52.715 12:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:52.715 12:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:52.715 
12:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:52.715 12:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:52.715 12:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:52.715 1+0 records in 00:18:52.715 1+0 records out 00:18:52.715 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345213 s, 11.9 MB/s 00:18:52.715 12:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:52.715 12:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:52.715 12:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:52.715 12:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:52.715 12:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:52.715 12:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:52.715 12:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:52.715 12:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:52.715 12:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:52.715 12:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:19:02.680 63488+0 records in 00:19:02.680 63488+0 records out 00:19:02.680 32505856 bytes (33 MB, 31 MiB) copied, 8.24351 s, 3.9 MB/s 00:19:02.680 12:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:02.680 12:17:57 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:02.680 12:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:02.680 12:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:02.680 12:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:02.680 12:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:02.681 12:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:02.681 [2024-11-25 12:17:57.252801] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:02.681 12:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:02.681 12:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:02.681 12:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:02.681 12:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:02.681 12:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:02.681 12:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:02.681 12:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:02.681 12:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:02.681 12:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:02.681 12:17:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.681 12:17:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.681 [2024-11-25 12:17:57.280886] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:02.681 
12:17:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.681 12:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:02.681 12:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:02.681 12:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:02.681 12:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:02.681 12:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:02.681 12:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:02.681 12:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.681 12:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.681 12:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:02.681 12:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.681 12:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.681 12:17:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.681 12:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.681 12:17:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.681 12:17:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.681 12:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.681 "name": "raid_bdev1", 00:19:02.681 "uuid": "56433b9f-224f-4e8c-875e-176aa7ea907e", 00:19:02.681 "strip_size_kb": 0, 00:19:02.681 "state": 
"online", 00:19:02.681 "raid_level": "raid1", 00:19:02.681 "superblock": true, 00:19:02.681 "num_base_bdevs": 4, 00:19:02.681 "num_base_bdevs_discovered": 3, 00:19:02.681 "num_base_bdevs_operational": 3, 00:19:02.681 "base_bdevs_list": [ 00:19:02.681 { 00:19:02.681 "name": null, 00:19:02.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.681 "is_configured": false, 00:19:02.681 "data_offset": 0, 00:19:02.681 "data_size": 63488 00:19:02.681 }, 00:19:02.681 { 00:19:02.681 "name": "BaseBdev2", 00:19:02.681 "uuid": "07901b70-9227-56b1-9e79-545acbcc35f7", 00:19:02.681 "is_configured": true, 00:19:02.681 "data_offset": 2048, 00:19:02.681 "data_size": 63488 00:19:02.681 }, 00:19:02.681 { 00:19:02.681 "name": "BaseBdev3", 00:19:02.681 "uuid": "d0ef5c48-b1dc-50b6-99ac-b9bdfcf84274", 00:19:02.681 "is_configured": true, 00:19:02.681 "data_offset": 2048, 00:19:02.681 "data_size": 63488 00:19:02.681 }, 00:19:02.681 { 00:19:02.681 "name": "BaseBdev4", 00:19:02.681 "uuid": "d624b5da-19ed-561e-b904-d72fd10442fd", 00:19:02.681 "is_configured": true, 00:19:02.681 "data_offset": 2048, 00:19:02.681 "data_size": 63488 00:19:02.681 } 00:19:02.681 ] 00:19:02.681 }' 00:19:02.681 12:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.681 12:17:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.681 12:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:02.681 12:17:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.681 12:17:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.681 [2024-11-25 12:17:57.741013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:02.681 [2024-11-25 12:17:57.755263] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:19:02.681 12:17:57 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.681 12:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:02.681 [2024-11-25 12:17:57.757868] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:02.681 12:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:02.681 12:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:02.681 12:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:02.681 12:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:02.681 12:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:02.681 12:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.681 12:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.681 12:17:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.681 12:17:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.940 12:17:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.940 12:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:02.940 "name": "raid_bdev1", 00:19:02.940 "uuid": "56433b9f-224f-4e8c-875e-176aa7ea907e", 00:19:02.940 "strip_size_kb": 0, 00:19:02.940 "state": "online", 00:19:02.940 "raid_level": "raid1", 00:19:02.940 "superblock": true, 00:19:02.940 "num_base_bdevs": 4, 00:19:02.940 "num_base_bdevs_discovered": 4, 00:19:02.940 "num_base_bdevs_operational": 4, 00:19:02.940 "process": { 00:19:02.940 "type": "rebuild", 00:19:02.940 "target": "spare", 00:19:02.940 "progress": { 00:19:02.940 "blocks": 20480, 
00:19:02.940 "percent": 32 00:19:02.940 } 00:19:02.940 }, 00:19:02.940 "base_bdevs_list": [ 00:19:02.940 { 00:19:02.940 "name": "spare", 00:19:02.940 "uuid": "258fe83a-91bc-5377-9b83-a0015162b093", 00:19:02.940 "is_configured": true, 00:19:02.940 "data_offset": 2048, 00:19:02.940 "data_size": 63488 00:19:02.940 }, 00:19:02.940 { 00:19:02.940 "name": "BaseBdev2", 00:19:02.940 "uuid": "07901b70-9227-56b1-9e79-545acbcc35f7", 00:19:02.940 "is_configured": true, 00:19:02.940 "data_offset": 2048, 00:19:02.940 "data_size": 63488 00:19:02.940 }, 00:19:02.940 { 00:19:02.940 "name": "BaseBdev3", 00:19:02.940 "uuid": "d0ef5c48-b1dc-50b6-99ac-b9bdfcf84274", 00:19:02.940 "is_configured": true, 00:19:02.940 "data_offset": 2048, 00:19:02.941 "data_size": 63488 00:19:02.941 }, 00:19:02.941 { 00:19:02.941 "name": "BaseBdev4", 00:19:02.941 "uuid": "d624b5da-19ed-561e-b904-d72fd10442fd", 00:19:02.941 "is_configured": true, 00:19:02.941 "data_offset": 2048, 00:19:02.941 "data_size": 63488 00:19:02.941 } 00:19:02.941 ] 00:19:02.941 }' 00:19:02.941 12:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:02.941 12:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:02.941 12:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:02.941 12:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:02.941 12:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:02.941 12:17:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.941 12:17:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.941 [2024-11-25 12:17:58.907181] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:02.941 [2024-11-25 12:17:58.967174] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:02.941 [2024-11-25 12:17:58.967254] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:02.941 [2024-11-25 12:17:58.967280] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:02.941 [2024-11-25 12:17:58.967306] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:02.941 12:17:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.941 12:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:02.941 12:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:02.941 12:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:02.941 12:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:02.941 12:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:02.941 12:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:02.941 12:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.941 12:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.941 12:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:02.941 12:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.941 12:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.941 12:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.941 12:17:58 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.941 12:17:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.941 12:17:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.199 12:17:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:03.199 "name": "raid_bdev1", 00:19:03.199 "uuid": "56433b9f-224f-4e8c-875e-176aa7ea907e", 00:19:03.199 "strip_size_kb": 0, 00:19:03.199 "state": "online", 00:19:03.199 "raid_level": "raid1", 00:19:03.199 "superblock": true, 00:19:03.199 "num_base_bdevs": 4, 00:19:03.199 "num_base_bdevs_discovered": 3, 00:19:03.199 "num_base_bdevs_operational": 3, 00:19:03.199 "base_bdevs_list": [ 00:19:03.199 { 00:19:03.199 "name": null, 00:19:03.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.199 "is_configured": false, 00:19:03.199 "data_offset": 0, 00:19:03.199 "data_size": 63488 00:19:03.199 }, 00:19:03.199 { 00:19:03.199 "name": "BaseBdev2", 00:19:03.199 "uuid": "07901b70-9227-56b1-9e79-545acbcc35f7", 00:19:03.199 "is_configured": true, 00:19:03.199 "data_offset": 2048, 00:19:03.199 "data_size": 63488 00:19:03.199 }, 00:19:03.199 { 00:19:03.199 "name": "BaseBdev3", 00:19:03.200 "uuid": "d0ef5c48-b1dc-50b6-99ac-b9bdfcf84274", 00:19:03.200 "is_configured": true, 00:19:03.200 "data_offset": 2048, 00:19:03.200 "data_size": 63488 00:19:03.200 }, 00:19:03.200 { 00:19:03.200 "name": "BaseBdev4", 00:19:03.200 "uuid": "d624b5da-19ed-561e-b904-d72fd10442fd", 00:19:03.200 "is_configured": true, 00:19:03.200 "data_offset": 2048, 00:19:03.200 "data_size": 63488 00:19:03.200 } 00:19:03.200 ] 00:19:03.200 }' 00:19:03.200 12:17:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.200 12:17:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.458 12:17:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 
00:19:03.458 12:17:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:03.458 12:17:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:03.458 12:17:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:03.458 12:17:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:03.458 12:17:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.458 12:17:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.458 12:17:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.458 12:17:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.458 12:17:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.458 12:17:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:03.458 "name": "raid_bdev1", 00:19:03.458 "uuid": "56433b9f-224f-4e8c-875e-176aa7ea907e", 00:19:03.458 "strip_size_kb": 0, 00:19:03.458 "state": "online", 00:19:03.458 "raid_level": "raid1", 00:19:03.458 "superblock": true, 00:19:03.458 "num_base_bdevs": 4, 00:19:03.458 "num_base_bdevs_discovered": 3, 00:19:03.458 "num_base_bdevs_operational": 3, 00:19:03.458 "base_bdevs_list": [ 00:19:03.458 { 00:19:03.458 "name": null, 00:19:03.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.458 "is_configured": false, 00:19:03.458 "data_offset": 0, 00:19:03.458 "data_size": 63488 00:19:03.458 }, 00:19:03.458 { 00:19:03.458 "name": "BaseBdev2", 00:19:03.458 "uuid": "07901b70-9227-56b1-9e79-545acbcc35f7", 00:19:03.458 "is_configured": true, 00:19:03.458 "data_offset": 2048, 00:19:03.458 "data_size": 63488 00:19:03.458 }, 00:19:03.458 { 00:19:03.458 "name": "BaseBdev3", 00:19:03.458 "uuid": 
"d0ef5c48-b1dc-50b6-99ac-b9bdfcf84274", 00:19:03.458 "is_configured": true, 00:19:03.458 "data_offset": 2048, 00:19:03.458 "data_size": 63488 00:19:03.458 }, 00:19:03.458 { 00:19:03.458 "name": "BaseBdev4", 00:19:03.458 "uuid": "d624b5da-19ed-561e-b904-d72fd10442fd", 00:19:03.458 "is_configured": true, 00:19:03.458 "data_offset": 2048, 00:19:03.458 "data_size": 63488 00:19:03.458 } 00:19:03.458 ] 00:19:03.458 }' 00:19:03.458 12:17:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:03.458 12:17:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:03.458 12:17:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:03.716 12:17:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:03.717 12:17:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:03.717 12:17:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.717 12:17:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.717 [2024-11-25 12:17:59.595419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:03.717 [2024-11-25 12:17:59.609127] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:19:03.717 12:17:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.717 12:17:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:03.717 [2024-11-25 12:17:59.611700] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:04.652 12:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:04.652 12:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:19:04.652 12:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:04.652 12:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:04.652 12:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:04.652 12:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.652 12:18:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.652 12:18:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.652 12:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.652 12:18:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.652 12:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:04.652 "name": "raid_bdev1", 00:19:04.652 "uuid": "56433b9f-224f-4e8c-875e-176aa7ea907e", 00:19:04.652 "strip_size_kb": 0, 00:19:04.652 "state": "online", 00:19:04.652 "raid_level": "raid1", 00:19:04.652 "superblock": true, 00:19:04.652 "num_base_bdevs": 4, 00:19:04.652 "num_base_bdevs_discovered": 4, 00:19:04.652 "num_base_bdevs_operational": 4, 00:19:04.652 "process": { 00:19:04.652 "type": "rebuild", 00:19:04.652 "target": "spare", 00:19:04.652 "progress": { 00:19:04.652 "blocks": 20480, 00:19:04.652 "percent": 32 00:19:04.652 } 00:19:04.652 }, 00:19:04.652 "base_bdevs_list": [ 00:19:04.652 { 00:19:04.652 "name": "spare", 00:19:04.652 "uuid": "258fe83a-91bc-5377-9b83-a0015162b093", 00:19:04.652 "is_configured": true, 00:19:04.652 "data_offset": 2048, 00:19:04.652 "data_size": 63488 00:19:04.652 }, 00:19:04.652 { 00:19:04.652 "name": "BaseBdev2", 00:19:04.652 "uuid": "07901b70-9227-56b1-9e79-545acbcc35f7", 00:19:04.652 "is_configured": true, 00:19:04.652 "data_offset": 2048, 
00:19:04.652 "data_size": 63488 00:19:04.652 }, 00:19:04.652 { 00:19:04.652 "name": "BaseBdev3", 00:19:04.652 "uuid": "d0ef5c48-b1dc-50b6-99ac-b9bdfcf84274", 00:19:04.652 "is_configured": true, 00:19:04.652 "data_offset": 2048, 00:19:04.652 "data_size": 63488 00:19:04.652 }, 00:19:04.652 { 00:19:04.652 "name": "BaseBdev4", 00:19:04.652 "uuid": "d624b5da-19ed-561e-b904-d72fd10442fd", 00:19:04.652 "is_configured": true, 00:19:04.652 "data_offset": 2048, 00:19:04.652 "data_size": 63488 00:19:04.652 } 00:19:04.652 ] 00:19:04.652 }' 00:19:04.652 12:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:04.911 12:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:04.911 12:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:04.911 12:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:04.911 12:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:04.911 12:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:04.911 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:04.911 12:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:19:04.911 12:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:04.911 12:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:19:04.911 12:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:04.911 12:18:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.911 12:18:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.911 [2024-11-25 12:18:00.805102] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:04.911 [2024-11-25 12:18:00.920419] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:19:04.911 12:18:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.911 12:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:19:04.911 12:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:19:04.911 12:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:04.911 12:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:04.911 12:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:04.911 12:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:04.911 12:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:04.911 12:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.911 12:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.911 12:18:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.911 12:18:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.911 12:18:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.911 12:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:04.911 "name": "raid_bdev1", 00:19:04.911 "uuid": "56433b9f-224f-4e8c-875e-176aa7ea907e", 00:19:04.911 "strip_size_kb": 0, 00:19:04.911 "state": "online", 00:19:04.911 "raid_level": "raid1", 00:19:04.911 "superblock": true, 00:19:04.911 "num_base_bdevs": 4, 
00:19:04.911 "num_base_bdevs_discovered": 3, 00:19:04.911 "num_base_bdevs_operational": 3, 00:19:04.911 "process": { 00:19:04.911 "type": "rebuild", 00:19:04.911 "target": "spare", 00:19:04.911 "progress": { 00:19:04.911 "blocks": 24576, 00:19:04.911 "percent": 38 00:19:04.911 } 00:19:04.911 }, 00:19:04.911 "base_bdevs_list": [ 00:19:04.911 { 00:19:04.911 "name": "spare", 00:19:04.911 "uuid": "258fe83a-91bc-5377-9b83-a0015162b093", 00:19:04.911 "is_configured": true, 00:19:04.911 "data_offset": 2048, 00:19:04.911 "data_size": 63488 00:19:04.911 }, 00:19:04.911 { 00:19:04.911 "name": null, 00:19:04.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.911 "is_configured": false, 00:19:04.911 "data_offset": 0, 00:19:04.911 "data_size": 63488 00:19:04.911 }, 00:19:04.911 { 00:19:04.911 "name": "BaseBdev3", 00:19:04.911 "uuid": "d0ef5c48-b1dc-50b6-99ac-b9bdfcf84274", 00:19:04.911 "is_configured": true, 00:19:04.911 "data_offset": 2048, 00:19:04.911 "data_size": 63488 00:19:04.911 }, 00:19:04.911 { 00:19:04.911 "name": "BaseBdev4", 00:19:04.911 "uuid": "d624b5da-19ed-561e-b904-d72fd10442fd", 00:19:04.911 "is_configured": true, 00:19:04.911 "data_offset": 2048, 00:19:04.911 "data_size": 63488 00:19:04.911 } 00:19:04.911 ] 00:19:04.911 }' 00:19:04.911 12:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:05.170 12:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:05.170 12:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:05.170 12:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:05.170 12:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=500 00:19:05.170 12:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:05.170 12:18:01 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:05.170 12:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:05.170 12:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:05.170 12:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:05.170 12:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:05.170 12:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.170 12:18:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.170 12:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.170 12:18:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.170 12:18:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.170 12:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:05.170 "name": "raid_bdev1", 00:19:05.170 "uuid": "56433b9f-224f-4e8c-875e-176aa7ea907e", 00:19:05.170 "strip_size_kb": 0, 00:19:05.170 "state": "online", 00:19:05.170 "raid_level": "raid1", 00:19:05.170 "superblock": true, 00:19:05.170 "num_base_bdevs": 4, 00:19:05.170 "num_base_bdevs_discovered": 3, 00:19:05.170 "num_base_bdevs_operational": 3, 00:19:05.170 "process": { 00:19:05.170 "type": "rebuild", 00:19:05.170 "target": "spare", 00:19:05.170 "progress": { 00:19:05.170 "blocks": 26624, 00:19:05.170 "percent": 41 00:19:05.170 } 00:19:05.170 }, 00:19:05.170 "base_bdevs_list": [ 00:19:05.170 { 00:19:05.170 "name": "spare", 00:19:05.170 "uuid": "258fe83a-91bc-5377-9b83-a0015162b093", 00:19:05.170 "is_configured": true, 00:19:05.170 "data_offset": 2048, 00:19:05.170 "data_size": 63488 00:19:05.170 }, 00:19:05.170 { 
00:19:05.170 "name": null, 00:19:05.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.170 "is_configured": false, 00:19:05.170 "data_offset": 0, 00:19:05.170 "data_size": 63488 00:19:05.170 }, 00:19:05.170 { 00:19:05.170 "name": "BaseBdev3", 00:19:05.170 "uuid": "d0ef5c48-b1dc-50b6-99ac-b9bdfcf84274", 00:19:05.170 "is_configured": true, 00:19:05.170 "data_offset": 2048, 00:19:05.170 "data_size": 63488 00:19:05.170 }, 00:19:05.170 { 00:19:05.170 "name": "BaseBdev4", 00:19:05.170 "uuid": "d624b5da-19ed-561e-b904-d72fd10442fd", 00:19:05.170 "is_configured": true, 00:19:05.170 "data_offset": 2048, 00:19:05.170 "data_size": 63488 00:19:05.170 } 00:19:05.170 ] 00:19:05.170 }' 00:19:05.170 12:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:05.170 12:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:05.170 12:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:05.170 12:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:05.170 12:18:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:06.545 12:18:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:06.545 12:18:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:06.545 12:18:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:06.545 12:18:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:06.545 12:18:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:06.545 12:18:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:06.545 12:18:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:06.545 12:18:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.545 12:18:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.545 12:18:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.545 12:18:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.545 12:18:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:06.545 "name": "raid_bdev1", 00:19:06.545 "uuid": "56433b9f-224f-4e8c-875e-176aa7ea907e", 00:19:06.545 "strip_size_kb": 0, 00:19:06.545 "state": "online", 00:19:06.545 "raid_level": "raid1", 00:19:06.545 "superblock": true, 00:19:06.545 "num_base_bdevs": 4, 00:19:06.545 "num_base_bdevs_discovered": 3, 00:19:06.545 "num_base_bdevs_operational": 3, 00:19:06.545 "process": { 00:19:06.545 "type": "rebuild", 00:19:06.545 "target": "spare", 00:19:06.545 "progress": { 00:19:06.545 "blocks": 51200, 00:19:06.545 "percent": 80 00:19:06.545 } 00:19:06.545 }, 00:19:06.545 "base_bdevs_list": [ 00:19:06.545 { 00:19:06.545 "name": "spare", 00:19:06.545 "uuid": "258fe83a-91bc-5377-9b83-a0015162b093", 00:19:06.545 "is_configured": true, 00:19:06.545 "data_offset": 2048, 00:19:06.545 "data_size": 63488 00:19:06.545 }, 00:19:06.545 { 00:19:06.545 "name": null, 00:19:06.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.545 "is_configured": false, 00:19:06.545 "data_offset": 0, 00:19:06.545 "data_size": 63488 00:19:06.545 }, 00:19:06.545 { 00:19:06.545 "name": "BaseBdev3", 00:19:06.545 "uuid": "d0ef5c48-b1dc-50b6-99ac-b9bdfcf84274", 00:19:06.545 "is_configured": true, 00:19:06.545 "data_offset": 2048, 00:19:06.545 "data_size": 63488 00:19:06.545 }, 00:19:06.545 { 00:19:06.545 "name": "BaseBdev4", 00:19:06.545 "uuid": "d624b5da-19ed-561e-b904-d72fd10442fd", 00:19:06.545 "is_configured": true, 00:19:06.545 "data_offset": 
2048, 00:19:06.545 "data_size": 63488 00:19:06.545 } 00:19:06.545 ] 00:19:06.545 }' 00:19:06.545 12:18:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:06.545 12:18:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:06.545 12:18:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:06.545 12:18:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:06.545 12:18:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:06.803 [2024-11-25 12:18:02.834679] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:06.803 [2024-11-25 12:18:02.834805] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:06.803 [2024-11-25 12:18:02.835003] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:07.370 12:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:07.370 12:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:07.370 12:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:07.370 12:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:07.370 12:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:07.370 12:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:07.370 12:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.370 12:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.370 12:18:03 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.370 12:18:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.370 12:18:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.370 12:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:07.370 "name": "raid_bdev1", 00:19:07.370 "uuid": "56433b9f-224f-4e8c-875e-176aa7ea907e", 00:19:07.370 "strip_size_kb": 0, 00:19:07.370 "state": "online", 00:19:07.370 "raid_level": "raid1", 00:19:07.370 "superblock": true, 00:19:07.370 "num_base_bdevs": 4, 00:19:07.370 "num_base_bdevs_discovered": 3, 00:19:07.370 "num_base_bdevs_operational": 3, 00:19:07.370 "base_bdevs_list": [ 00:19:07.370 { 00:19:07.370 "name": "spare", 00:19:07.370 "uuid": "258fe83a-91bc-5377-9b83-a0015162b093", 00:19:07.370 "is_configured": true, 00:19:07.370 "data_offset": 2048, 00:19:07.370 "data_size": 63488 00:19:07.370 }, 00:19:07.370 { 00:19:07.370 "name": null, 00:19:07.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.370 "is_configured": false, 00:19:07.370 "data_offset": 0, 00:19:07.370 "data_size": 63488 00:19:07.370 }, 00:19:07.370 { 00:19:07.370 "name": "BaseBdev3", 00:19:07.370 "uuid": "d0ef5c48-b1dc-50b6-99ac-b9bdfcf84274", 00:19:07.370 "is_configured": true, 00:19:07.370 "data_offset": 2048, 00:19:07.370 "data_size": 63488 00:19:07.370 }, 00:19:07.370 { 00:19:07.370 "name": "BaseBdev4", 00:19:07.370 "uuid": "d624b5da-19ed-561e-b904-d72fd10442fd", 00:19:07.370 "is_configured": true, 00:19:07.370 "data_offset": 2048, 00:19:07.370 "data_size": 63488 00:19:07.371 } 00:19:07.371 ] 00:19:07.371 }' 00:19:07.371 12:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:07.629 12:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:07.629 12:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:19:07.629 12:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:07.629 12:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:19:07.629 12:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:07.629 12:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:07.629 12:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:07.629 12:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:07.629 12:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:07.629 12:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.629 12:18:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.629 12:18:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.629 12:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.629 12:18:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.629 12:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:07.629 "name": "raid_bdev1", 00:19:07.629 "uuid": "56433b9f-224f-4e8c-875e-176aa7ea907e", 00:19:07.629 "strip_size_kb": 0, 00:19:07.629 "state": "online", 00:19:07.629 "raid_level": "raid1", 00:19:07.629 "superblock": true, 00:19:07.629 "num_base_bdevs": 4, 00:19:07.629 "num_base_bdevs_discovered": 3, 00:19:07.629 "num_base_bdevs_operational": 3, 00:19:07.629 "base_bdevs_list": [ 00:19:07.629 { 00:19:07.629 "name": "spare", 00:19:07.629 "uuid": "258fe83a-91bc-5377-9b83-a0015162b093", 00:19:07.629 "is_configured": true, 00:19:07.629 "data_offset": 2048, 
00:19:07.629 "data_size": 63488 00:19:07.629 }, 00:19:07.629 { 00:19:07.629 "name": null, 00:19:07.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.629 "is_configured": false, 00:19:07.629 "data_offset": 0, 00:19:07.629 "data_size": 63488 00:19:07.629 }, 00:19:07.629 { 00:19:07.629 "name": "BaseBdev3", 00:19:07.629 "uuid": "d0ef5c48-b1dc-50b6-99ac-b9bdfcf84274", 00:19:07.629 "is_configured": true, 00:19:07.629 "data_offset": 2048, 00:19:07.629 "data_size": 63488 00:19:07.629 }, 00:19:07.629 { 00:19:07.629 "name": "BaseBdev4", 00:19:07.629 "uuid": "d624b5da-19ed-561e-b904-d72fd10442fd", 00:19:07.629 "is_configured": true, 00:19:07.629 "data_offset": 2048, 00:19:07.629 "data_size": 63488 00:19:07.629 } 00:19:07.629 ] 00:19:07.629 }' 00:19:07.629 12:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:07.629 12:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:07.629 12:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:07.629 12:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:07.629 12:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:07.629 12:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:07.629 12:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:07.629 12:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:07.629 12:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:07.629 12:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:07.629 12:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.629 
12:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:07.629 12:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:07.629 12:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.630 12:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.630 12:18:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.630 12:18:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.630 12:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.888 12:18:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.888 12:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:07.888 "name": "raid_bdev1", 00:19:07.888 "uuid": "56433b9f-224f-4e8c-875e-176aa7ea907e", 00:19:07.888 "strip_size_kb": 0, 00:19:07.888 "state": "online", 00:19:07.888 "raid_level": "raid1", 00:19:07.888 "superblock": true, 00:19:07.888 "num_base_bdevs": 4, 00:19:07.888 "num_base_bdevs_discovered": 3, 00:19:07.889 "num_base_bdevs_operational": 3, 00:19:07.889 "base_bdevs_list": [ 00:19:07.889 { 00:19:07.889 "name": "spare", 00:19:07.889 "uuid": "258fe83a-91bc-5377-9b83-a0015162b093", 00:19:07.889 "is_configured": true, 00:19:07.889 "data_offset": 2048, 00:19:07.889 "data_size": 63488 00:19:07.889 }, 00:19:07.889 { 00:19:07.889 "name": null, 00:19:07.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.889 "is_configured": false, 00:19:07.889 "data_offset": 0, 00:19:07.889 "data_size": 63488 00:19:07.889 }, 00:19:07.889 { 00:19:07.889 "name": "BaseBdev3", 00:19:07.889 "uuid": "d0ef5c48-b1dc-50b6-99ac-b9bdfcf84274", 00:19:07.889 "is_configured": true, 00:19:07.889 "data_offset": 2048, 00:19:07.889 "data_size": 63488 
00:19:07.889 }, 00:19:07.889 { 00:19:07.889 "name": "BaseBdev4", 00:19:07.889 "uuid": "d624b5da-19ed-561e-b904-d72fd10442fd", 00:19:07.889 "is_configured": true, 00:19:07.889 "data_offset": 2048, 00:19:07.889 "data_size": 63488 00:19:07.889 } 00:19:07.889 ] 00:19:07.889 }' 00:19:07.889 12:18:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:07.889 12:18:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.148 12:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:08.148 12:18:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.148 12:18:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.148 [2024-11-25 12:18:04.222963] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:08.148 [2024-11-25 12:18:04.223140] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:08.148 [2024-11-25 12:18:04.223383] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:08.148 [2024-11-25 12:18:04.223611] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:08.148 [2024-11-25 12:18:04.223640] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:08.148 12:18:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.148 12:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:19:08.148 12:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.148 12:18:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.148 12:18:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.407 
12:18:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.407 12:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:08.407 12:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:08.407 12:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:08.407 12:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:08.407 12:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:08.407 12:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:08.407 12:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:08.407 12:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:08.407 12:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:08.407 12:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:08.407 12:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:08.407 12:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:08.407 12:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:08.665 /dev/nbd0 00:19:08.665 12:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:08.665 12:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:08.665 12:18:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:08.665 12:18:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 
00:19:08.665 12:18:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:08.665 12:18:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:08.665 12:18:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:08.665 12:18:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:08.665 12:18:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:08.665 12:18:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:08.665 12:18:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:08.665 1+0 records in 00:19:08.665 1+0 records out 00:19:08.665 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300849 s, 13.6 MB/s 00:19:08.665 12:18:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:08.665 12:18:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:08.665 12:18:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:08.665 12:18:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:08.665 12:18:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:08.665 12:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:08.665 12:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:08.665 12:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:08.923 /dev/nbd1 00:19:08.923 12:18:04 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:08.923 12:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:08.923 12:18:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:08.923 12:18:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:08.924 12:18:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:08.924 12:18:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:08.924 12:18:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:08.924 12:18:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:08.924 12:18:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:08.924 12:18:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:08.924 12:18:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:08.924 1+0 records in 00:19:08.924 1+0 records out 00:19:08.924 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003753 s, 10.9 MB/s 00:19:08.924 12:18:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:08.924 12:18:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:08.924 12:18:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:08.924 12:18:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:08.924 12:18:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:08.924 12:18:04 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:08.924 12:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:08.924 12:18:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:09.182 12:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:09.182 12:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:09.182 12:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:09.182 12:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:09.182 12:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:09.182 12:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:09.182 12:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:09.441 12:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:09.441 12:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:09.441 12:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:09.441 12:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:09.441 12:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:09.441 12:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:09.441 12:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:09.441 12:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:09.441 12:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:19:09.441 12:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:09.699 12:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:09.699 12:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:09.699 12:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:09.699 12:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:09.699 12:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:09.699 12:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:09.699 12:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:09.699 12:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:09.699 12:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:09.699 12:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:09.699 12:18:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.699 12:18:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.699 12:18:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.699 12:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:09.699 12:18:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.699 12:18:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.699 [2024-11-25 12:18:05.718066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:09.699 [2024-11-25 
12:18:05.718132] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:09.699 [2024-11-25 12:18:05.718166] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:19:09.699 [2024-11-25 12:18:05.718181] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:09.699 [2024-11-25 12:18:05.721243] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:09.699 [2024-11-25 12:18:05.721291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:09.699 [2024-11-25 12:18:05.721451] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:09.699 [2024-11-25 12:18:05.721523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:09.699 [2024-11-25 12:18:05.721703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:09.699 [2024-11-25 12:18:05.721858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:09.699 spare 00:19:09.699 12:18:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.699 12:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:09.699 12:18:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.699 12:18:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.957 [2024-11-25 12:18:05.822000] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:09.958 [2024-11-25 12:18:05.822050] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:09.958 [2024-11-25 12:18:05.822558] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:19:09.958 [2024-11-25 12:18:05.822823] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007b00 00:19:09.958 [2024-11-25 12:18:05.822847] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:09.958 [2024-11-25 12:18:05.823093] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:09.958 12:18:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.958 12:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:09.958 12:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:09.958 12:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:09.958 12:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:09.958 12:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:09.958 12:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:09.958 12:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:09.958 12:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.958 12:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:09.958 12:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:09.958 12:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.958 12:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.958 12:18:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.958 12:18:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.958 12:18:05 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.958 12:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:09.958 "name": "raid_bdev1", 00:19:09.958 "uuid": "56433b9f-224f-4e8c-875e-176aa7ea907e", 00:19:09.958 "strip_size_kb": 0, 00:19:09.958 "state": "online", 00:19:09.958 "raid_level": "raid1", 00:19:09.958 "superblock": true, 00:19:09.958 "num_base_bdevs": 4, 00:19:09.958 "num_base_bdevs_discovered": 3, 00:19:09.958 "num_base_bdevs_operational": 3, 00:19:09.958 "base_bdevs_list": [ 00:19:09.958 { 00:19:09.958 "name": "spare", 00:19:09.958 "uuid": "258fe83a-91bc-5377-9b83-a0015162b093", 00:19:09.958 "is_configured": true, 00:19:09.958 "data_offset": 2048, 00:19:09.958 "data_size": 63488 00:19:09.958 }, 00:19:09.958 { 00:19:09.958 "name": null, 00:19:09.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.958 "is_configured": false, 00:19:09.958 "data_offset": 2048, 00:19:09.958 "data_size": 63488 00:19:09.958 }, 00:19:09.958 { 00:19:09.958 "name": "BaseBdev3", 00:19:09.958 "uuid": "d0ef5c48-b1dc-50b6-99ac-b9bdfcf84274", 00:19:09.958 "is_configured": true, 00:19:09.958 "data_offset": 2048, 00:19:09.958 "data_size": 63488 00:19:09.958 }, 00:19:09.958 { 00:19:09.958 "name": "BaseBdev4", 00:19:09.958 "uuid": "d624b5da-19ed-561e-b904-d72fd10442fd", 00:19:09.958 "is_configured": true, 00:19:09.958 "data_offset": 2048, 00:19:09.958 "data_size": 63488 00:19:09.958 } 00:19:09.958 ] 00:19:09.958 }' 00:19:09.958 12:18:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:09.958 12:18:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.525 12:18:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:10.525 12:18:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:10.525 12:18:06 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:10.525 12:18:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:10.525 12:18:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:10.525 12:18:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.525 12:18:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.525 12:18:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.525 12:18:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.525 12:18:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.525 12:18:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:10.525 "name": "raid_bdev1", 00:19:10.525 "uuid": "56433b9f-224f-4e8c-875e-176aa7ea907e", 00:19:10.525 "strip_size_kb": 0, 00:19:10.525 "state": "online", 00:19:10.525 "raid_level": "raid1", 00:19:10.525 "superblock": true, 00:19:10.525 "num_base_bdevs": 4, 00:19:10.525 "num_base_bdevs_discovered": 3, 00:19:10.525 "num_base_bdevs_operational": 3, 00:19:10.525 "base_bdevs_list": [ 00:19:10.525 { 00:19:10.525 "name": "spare", 00:19:10.525 "uuid": "258fe83a-91bc-5377-9b83-a0015162b093", 00:19:10.525 "is_configured": true, 00:19:10.525 "data_offset": 2048, 00:19:10.525 "data_size": 63488 00:19:10.525 }, 00:19:10.525 { 00:19:10.525 "name": null, 00:19:10.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.525 "is_configured": false, 00:19:10.525 "data_offset": 2048, 00:19:10.525 "data_size": 63488 00:19:10.525 }, 00:19:10.525 { 00:19:10.525 "name": "BaseBdev3", 00:19:10.525 "uuid": "d0ef5c48-b1dc-50b6-99ac-b9bdfcf84274", 00:19:10.525 "is_configured": true, 00:19:10.525 "data_offset": 2048, 00:19:10.525 "data_size": 63488 00:19:10.525 }, 00:19:10.525 { 00:19:10.525 
"name": "BaseBdev4", 00:19:10.525 "uuid": "d624b5da-19ed-561e-b904-d72fd10442fd", 00:19:10.525 "is_configured": true, 00:19:10.525 "data_offset": 2048, 00:19:10.525 "data_size": 63488 00:19:10.525 } 00:19:10.525 ] 00:19:10.525 }' 00:19:10.525 12:18:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:10.525 12:18:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:10.525 12:18:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:10.525 12:18:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:10.525 12:18:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:10.525 12:18:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.525 12:18:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.525 12:18:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.525 12:18:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.525 12:18:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:10.525 12:18:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:10.525 12:18:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.525 12:18:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.525 [2024-11-25 12:18:06.535256] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:10.525 12:18:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.525 12:18:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 
00:19:10.525 12:18:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:10.525 12:18:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:10.525 12:18:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:10.525 12:18:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:10.525 12:18:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:10.525 12:18:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:10.525 12:18:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:10.525 12:18:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:10.525 12:18:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:10.525 12:18:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.525 12:18:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.525 12:18:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.525 12:18:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.525 12:18:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.525 12:18:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:10.525 "name": "raid_bdev1", 00:19:10.525 "uuid": "56433b9f-224f-4e8c-875e-176aa7ea907e", 00:19:10.525 "strip_size_kb": 0, 00:19:10.525 "state": "online", 00:19:10.525 "raid_level": "raid1", 00:19:10.525 "superblock": true, 00:19:10.525 "num_base_bdevs": 4, 00:19:10.525 "num_base_bdevs_discovered": 2, 00:19:10.525 "num_base_bdevs_operational": 2, 00:19:10.525 
"base_bdevs_list": [ 00:19:10.525 { 00:19:10.525 "name": null, 00:19:10.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.525 "is_configured": false, 00:19:10.525 "data_offset": 0, 00:19:10.525 "data_size": 63488 00:19:10.525 }, 00:19:10.525 { 00:19:10.525 "name": null, 00:19:10.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.525 "is_configured": false, 00:19:10.525 "data_offset": 2048, 00:19:10.525 "data_size": 63488 00:19:10.525 }, 00:19:10.525 { 00:19:10.525 "name": "BaseBdev3", 00:19:10.525 "uuid": "d0ef5c48-b1dc-50b6-99ac-b9bdfcf84274", 00:19:10.525 "is_configured": true, 00:19:10.525 "data_offset": 2048, 00:19:10.525 "data_size": 63488 00:19:10.525 }, 00:19:10.525 { 00:19:10.525 "name": "BaseBdev4", 00:19:10.525 "uuid": "d624b5da-19ed-561e-b904-d72fd10442fd", 00:19:10.525 "is_configured": true, 00:19:10.525 "data_offset": 2048, 00:19:10.525 "data_size": 63488 00:19:10.525 } 00:19:10.525 ] 00:19:10.525 }' 00:19:10.525 12:18:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:10.525 12:18:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.092 12:18:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:11.092 12:18:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.092 12:18:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.092 [2024-11-25 12:18:07.063437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:11.092 [2024-11-25 12:18:07.063678] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:19:11.092 [2024-11-25 12:18:07.063704] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:11.092 [2024-11-25 12:18:07.063758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:11.092 [2024-11-25 12:18:07.077203] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:19:11.092 12:18:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.092 12:18:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:11.092 [2024-11-25 12:18:07.080100] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:12.026 12:18:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:12.026 12:18:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:12.026 12:18:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:12.026 12:18:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:12.026 12:18:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:12.026 12:18:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.026 12:18:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.026 12:18:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.026 12:18:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.026 12:18:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.285 12:18:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:12.285 "name": "raid_bdev1", 00:19:12.285 "uuid": "56433b9f-224f-4e8c-875e-176aa7ea907e", 00:19:12.285 "strip_size_kb": 0, 00:19:12.285 "state": "online", 00:19:12.285 "raid_level": "raid1", 
00:19:12.285 "superblock": true, 00:19:12.285 "num_base_bdevs": 4, 00:19:12.285 "num_base_bdevs_discovered": 3, 00:19:12.285 "num_base_bdevs_operational": 3, 00:19:12.285 "process": { 00:19:12.285 "type": "rebuild", 00:19:12.285 "target": "spare", 00:19:12.285 "progress": { 00:19:12.285 "blocks": 20480, 00:19:12.285 "percent": 32 00:19:12.285 } 00:19:12.285 }, 00:19:12.285 "base_bdevs_list": [ 00:19:12.285 { 00:19:12.285 "name": "spare", 00:19:12.285 "uuid": "258fe83a-91bc-5377-9b83-a0015162b093", 00:19:12.285 "is_configured": true, 00:19:12.285 "data_offset": 2048, 00:19:12.285 "data_size": 63488 00:19:12.285 }, 00:19:12.285 { 00:19:12.285 "name": null, 00:19:12.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.285 "is_configured": false, 00:19:12.285 "data_offset": 2048, 00:19:12.285 "data_size": 63488 00:19:12.285 }, 00:19:12.285 { 00:19:12.285 "name": "BaseBdev3", 00:19:12.285 "uuid": "d0ef5c48-b1dc-50b6-99ac-b9bdfcf84274", 00:19:12.285 "is_configured": true, 00:19:12.285 "data_offset": 2048, 00:19:12.285 "data_size": 63488 00:19:12.285 }, 00:19:12.285 { 00:19:12.285 "name": "BaseBdev4", 00:19:12.285 "uuid": "d624b5da-19ed-561e-b904-d72fd10442fd", 00:19:12.285 "is_configured": true, 00:19:12.285 "data_offset": 2048, 00:19:12.285 "data_size": 63488 00:19:12.285 } 00:19:12.285 ] 00:19:12.285 }' 00:19:12.285 12:18:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:12.285 12:18:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:12.285 12:18:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:12.285 12:18:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:12.286 12:18:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:12.286 12:18:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:12.286 12:18:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.286 [2024-11-25 12:18:08.249191] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:12.286 [2024-11-25 12:18:08.289123] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:12.286 [2024-11-25 12:18:08.289204] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:12.286 [2024-11-25 12:18:08.289235] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:12.286 [2024-11-25 12:18:08.289247] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:12.286 12:18:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.286 12:18:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:12.286 12:18:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:12.286 12:18:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:12.286 12:18:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:12.286 12:18:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:12.286 12:18:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:12.286 12:18:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.286 12:18:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:12.286 12:18:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.286 12:18:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.286 12:18:08 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.286 12:18:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.286 12:18:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.286 12:18:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.286 12:18:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.286 12:18:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:12.286 "name": "raid_bdev1", 00:19:12.286 "uuid": "56433b9f-224f-4e8c-875e-176aa7ea907e", 00:19:12.286 "strip_size_kb": 0, 00:19:12.286 "state": "online", 00:19:12.286 "raid_level": "raid1", 00:19:12.286 "superblock": true, 00:19:12.286 "num_base_bdevs": 4, 00:19:12.286 "num_base_bdevs_discovered": 2, 00:19:12.286 "num_base_bdevs_operational": 2, 00:19:12.286 "base_bdevs_list": [ 00:19:12.286 { 00:19:12.286 "name": null, 00:19:12.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.286 "is_configured": false, 00:19:12.286 "data_offset": 0, 00:19:12.286 "data_size": 63488 00:19:12.286 }, 00:19:12.286 { 00:19:12.286 "name": null, 00:19:12.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.286 "is_configured": false, 00:19:12.286 "data_offset": 2048, 00:19:12.286 "data_size": 63488 00:19:12.286 }, 00:19:12.286 { 00:19:12.286 "name": "BaseBdev3", 00:19:12.286 "uuid": "d0ef5c48-b1dc-50b6-99ac-b9bdfcf84274", 00:19:12.286 "is_configured": true, 00:19:12.286 "data_offset": 2048, 00:19:12.286 "data_size": 63488 00:19:12.286 }, 00:19:12.286 { 00:19:12.286 "name": "BaseBdev4", 00:19:12.286 "uuid": "d624b5da-19ed-561e-b904-d72fd10442fd", 00:19:12.286 "is_configured": true, 00:19:12.286 "data_offset": 2048, 00:19:12.286 "data_size": 63488 00:19:12.286 } 00:19:12.286 ] 00:19:12.286 }' 00:19:12.286 12:18:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:19:12.286 12:18:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.853 12:18:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:12.853 12:18:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.853 12:18:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.853 [2024-11-25 12:18:08.857093] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:12.853 [2024-11-25 12:18:08.857308] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:12.853 [2024-11-25 12:18:08.857511] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:19:12.853 [2024-11-25 12:18:08.857652] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:12.853 [2024-11-25 12:18:08.858299] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:12.853 [2024-11-25 12:18:08.858365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:12.853 [2024-11-25 12:18:08.858516] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:12.853 [2024-11-25 12:18:08.858537] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:19:12.853 [2024-11-25 12:18:08.858555] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:12.853 [2024-11-25 12:18:08.858595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:12.853 [2024-11-25 12:18:08.872302] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:19:12.853 spare 00:19:12.853 12:18:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.853 12:18:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:12.853 [2024-11-25 12:18:08.874925] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:13.793 12:18:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:13.793 12:18:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:13.793 12:18:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:13.793 12:18:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:13.793 12:18:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:14.062 12:18:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.062 12:18:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.062 12:18:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.062 12:18:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.062 12:18:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.062 12:18:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:14.062 "name": "raid_bdev1", 00:19:14.062 "uuid": "56433b9f-224f-4e8c-875e-176aa7ea907e", 00:19:14.062 "strip_size_kb": 0, 00:19:14.062 "state": "online", 00:19:14.062 
"raid_level": "raid1", 00:19:14.062 "superblock": true, 00:19:14.062 "num_base_bdevs": 4, 00:19:14.062 "num_base_bdevs_discovered": 3, 00:19:14.062 "num_base_bdevs_operational": 3, 00:19:14.062 "process": { 00:19:14.062 "type": "rebuild", 00:19:14.062 "target": "spare", 00:19:14.062 "progress": { 00:19:14.062 "blocks": 20480, 00:19:14.062 "percent": 32 00:19:14.062 } 00:19:14.062 }, 00:19:14.062 "base_bdevs_list": [ 00:19:14.062 { 00:19:14.062 "name": "spare", 00:19:14.062 "uuid": "258fe83a-91bc-5377-9b83-a0015162b093", 00:19:14.062 "is_configured": true, 00:19:14.062 "data_offset": 2048, 00:19:14.062 "data_size": 63488 00:19:14.062 }, 00:19:14.062 { 00:19:14.062 "name": null, 00:19:14.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.062 "is_configured": false, 00:19:14.062 "data_offset": 2048, 00:19:14.062 "data_size": 63488 00:19:14.062 }, 00:19:14.062 { 00:19:14.062 "name": "BaseBdev3", 00:19:14.062 "uuid": "d0ef5c48-b1dc-50b6-99ac-b9bdfcf84274", 00:19:14.062 "is_configured": true, 00:19:14.062 "data_offset": 2048, 00:19:14.062 "data_size": 63488 00:19:14.062 }, 00:19:14.062 { 00:19:14.062 "name": "BaseBdev4", 00:19:14.062 "uuid": "d624b5da-19ed-561e-b904-d72fd10442fd", 00:19:14.062 "is_configured": true, 00:19:14.062 "data_offset": 2048, 00:19:14.062 "data_size": 63488 00:19:14.062 } 00:19:14.062 ] 00:19:14.062 }' 00:19:14.062 12:18:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:14.062 12:18:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:14.062 12:18:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:14.062 12:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:14.062 12:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:14.062 12:18:10 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.062 12:18:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.062 [2024-11-25 12:18:10.032186] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:14.062 [2024-11-25 12:18:10.084152] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:14.062 [2024-11-25 12:18:10.084241] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:14.062 [2024-11-25 12:18:10.084266] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:14.062 [2024-11-25 12:18:10.084281] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:14.062 12:18:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.062 12:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:14.062 12:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:14.062 12:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:14.062 12:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:14.063 12:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:14.063 12:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:14.063 12:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:14.063 12:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:14.063 12:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:14.063 12:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:14.063 
12:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.063 12:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.063 12:18:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.063 12:18:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.063 12:18:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.322 12:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:14.322 "name": "raid_bdev1", 00:19:14.322 "uuid": "56433b9f-224f-4e8c-875e-176aa7ea907e", 00:19:14.322 "strip_size_kb": 0, 00:19:14.322 "state": "online", 00:19:14.322 "raid_level": "raid1", 00:19:14.322 "superblock": true, 00:19:14.322 "num_base_bdevs": 4, 00:19:14.322 "num_base_bdevs_discovered": 2, 00:19:14.322 "num_base_bdevs_operational": 2, 00:19:14.322 "base_bdevs_list": [ 00:19:14.322 { 00:19:14.322 "name": null, 00:19:14.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.322 "is_configured": false, 00:19:14.322 "data_offset": 0, 00:19:14.322 "data_size": 63488 00:19:14.322 }, 00:19:14.322 { 00:19:14.322 "name": null, 00:19:14.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.322 "is_configured": false, 00:19:14.322 "data_offset": 2048, 00:19:14.322 "data_size": 63488 00:19:14.322 }, 00:19:14.322 { 00:19:14.322 "name": "BaseBdev3", 00:19:14.322 "uuid": "d0ef5c48-b1dc-50b6-99ac-b9bdfcf84274", 00:19:14.322 "is_configured": true, 00:19:14.322 "data_offset": 2048, 00:19:14.322 "data_size": 63488 00:19:14.322 }, 00:19:14.322 { 00:19:14.322 "name": "BaseBdev4", 00:19:14.322 "uuid": "d624b5da-19ed-561e-b904-d72fd10442fd", 00:19:14.322 "is_configured": true, 00:19:14.322 "data_offset": 2048, 00:19:14.322 "data_size": 63488 00:19:14.322 } 00:19:14.322 ] 00:19:14.322 }' 00:19:14.322 12:18:10 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:14.322 12:18:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.581 12:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:14.581 12:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:14.581 12:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:14.581 12:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:14.581 12:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:14.581 12:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.581 12:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.581 12:18:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.581 12:18:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.581 12:18:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.842 12:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:14.842 "name": "raid_bdev1", 00:19:14.842 "uuid": "56433b9f-224f-4e8c-875e-176aa7ea907e", 00:19:14.842 "strip_size_kb": 0, 00:19:14.842 "state": "online", 00:19:14.842 "raid_level": "raid1", 00:19:14.842 "superblock": true, 00:19:14.842 "num_base_bdevs": 4, 00:19:14.842 "num_base_bdevs_discovered": 2, 00:19:14.842 "num_base_bdevs_operational": 2, 00:19:14.842 "base_bdevs_list": [ 00:19:14.842 { 00:19:14.842 "name": null, 00:19:14.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.842 "is_configured": false, 00:19:14.842 "data_offset": 0, 00:19:14.842 "data_size": 63488 00:19:14.842 }, 00:19:14.842 
{ 00:19:14.842 "name": null, 00:19:14.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.842 "is_configured": false, 00:19:14.842 "data_offset": 2048, 00:19:14.842 "data_size": 63488 00:19:14.842 }, 00:19:14.842 { 00:19:14.842 "name": "BaseBdev3", 00:19:14.842 "uuid": "d0ef5c48-b1dc-50b6-99ac-b9bdfcf84274", 00:19:14.842 "is_configured": true, 00:19:14.842 "data_offset": 2048, 00:19:14.842 "data_size": 63488 00:19:14.842 }, 00:19:14.842 { 00:19:14.842 "name": "BaseBdev4", 00:19:14.842 "uuid": "d624b5da-19ed-561e-b904-d72fd10442fd", 00:19:14.842 "is_configured": true, 00:19:14.842 "data_offset": 2048, 00:19:14.842 "data_size": 63488 00:19:14.842 } 00:19:14.842 ] 00:19:14.842 }' 00:19:14.842 12:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:14.842 12:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:14.842 12:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:14.842 12:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:14.842 12:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:14.842 12:18:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.842 12:18:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.842 12:18:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.842 12:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:14.842 12:18:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.842 12:18:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.842 [2024-11-25 12:18:10.804285] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:14.842 [2024-11-25 12:18:10.804393] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:14.842 [2024-11-25 12:18:10.804426] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:19:14.842 [2024-11-25 12:18:10.804445] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:14.842 [2024-11-25 12:18:10.805023] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:14.842 [2024-11-25 12:18:10.805062] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:14.842 [2024-11-25 12:18:10.805186] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:14.842 [2024-11-25 12:18:10.805219] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:19:14.842 [2024-11-25 12:18:10.805232] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:14.842 [2024-11-25 12:18:10.805267] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:14.842 BaseBdev1 00:19:14.842 12:18:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.842 12:18:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:15.779 12:18:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:15.779 12:18:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:15.779 12:18:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:15.779 12:18:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:15.779 12:18:11 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:15.779 12:18:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:15.779 12:18:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:15.779 12:18:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:15.779 12:18:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:15.779 12:18:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:15.779 12:18:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.779 12:18:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.779 12:18:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.779 12:18:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.779 12:18:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.779 12:18:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:15.779 "name": "raid_bdev1", 00:19:15.779 "uuid": "56433b9f-224f-4e8c-875e-176aa7ea907e", 00:19:15.779 "strip_size_kb": 0, 00:19:15.779 "state": "online", 00:19:15.779 "raid_level": "raid1", 00:19:15.779 "superblock": true, 00:19:15.779 "num_base_bdevs": 4, 00:19:15.779 "num_base_bdevs_discovered": 2, 00:19:15.779 "num_base_bdevs_operational": 2, 00:19:15.779 "base_bdevs_list": [ 00:19:15.779 { 00:19:15.779 "name": null, 00:19:15.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.779 "is_configured": false, 00:19:15.779 "data_offset": 0, 00:19:15.779 "data_size": 63488 00:19:15.779 }, 00:19:15.779 { 00:19:15.779 "name": null, 00:19:15.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.779 
"is_configured": false, 00:19:15.779 "data_offset": 2048, 00:19:15.779 "data_size": 63488 00:19:15.779 }, 00:19:15.779 { 00:19:15.779 "name": "BaseBdev3", 00:19:15.779 "uuid": "d0ef5c48-b1dc-50b6-99ac-b9bdfcf84274", 00:19:15.779 "is_configured": true, 00:19:15.779 "data_offset": 2048, 00:19:15.779 "data_size": 63488 00:19:15.779 }, 00:19:15.779 { 00:19:15.779 "name": "BaseBdev4", 00:19:15.779 "uuid": "d624b5da-19ed-561e-b904-d72fd10442fd", 00:19:15.779 "is_configured": true, 00:19:15.779 "data_offset": 2048, 00:19:15.779 "data_size": 63488 00:19:15.779 } 00:19:15.779 ] 00:19:15.779 }' 00:19:15.779 12:18:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:15.779 12:18:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.347 12:18:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:16.347 12:18:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:16.347 12:18:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:16.347 12:18:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:16.347 12:18:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:16.347 12:18:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.347 12:18:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.347 12:18:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.347 12:18:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.347 12:18:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.347 12:18:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:19:16.347 "name": "raid_bdev1", 00:19:16.347 "uuid": "56433b9f-224f-4e8c-875e-176aa7ea907e", 00:19:16.347 "strip_size_kb": 0, 00:19:16.347 "state": "online", 00:19:16.347 "raid_level": "raid1", 00:19:16.347 "superblock": true, 00:19:16.347 "num_base_bdevs": 4, 00:19:16.347 "num_base_bdevs_discovered": 2, 00:19:16.347 "num_base_bdevs_operational": 2, 00:19:16.347 "base_bdevs_list": [ 00:19:16.347 { 00:19:16.347 "name": null, 00:19:16.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.347 "is_configured": false, 00:19:16.347 "data_offset": 0, 00:19:16.347 "data_size": 63488 00:19:16.347 }, 00:19:16.347 { 00:19:16.347 "name": null, 00:19:16.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.347 "is_configured": false, 00:19:16.347 "data_offset": 2048, 00:19:16.347 "data_size": 63488 00:19:16.347 }, 00:19:16.347 { 00:19:16.347 "name": "BaseBdev3", 00:19:16.347 "uuid": "d0ef5c48-b1dc-50b6-99ac-b9bdfcf84274", 00:19:16.347 "is_configured": true, 00:19:16.347 "data_offset": 2048, 00:19:16.347 "data_size": 63488 00:19:16.347 }, 00:19:16.347 { 00:19:16.347 "name": "BaseBdev4", 00:19:16.347 "uuid": "d624b5da-19ed-561e-b904-d72fd10442fd", 00:19:16.347 "is_configured": true, 00:19:16.347 "data_offset": 2048, 00:19:16.347 "data_size": 63488 00:19:16.347 } 00:19:16.347 ] 00:19:16.347 }' 00:19:16.347 12:18:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:16.347 12:18:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:16.347 12:18:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:16.606 12:18:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:16.606 12:18:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:16.606 12:18:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:19:16.606 12:18:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:16.606 12:18:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:16.606 12:18:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:16.606 12:18:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:16.606 12:18:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:16.606 12:18:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:16.606 12:18:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.606 12:18:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.606 [2024-11-25 12:18:12.484833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:16.606 [2024-11-25 12:18:12.485232] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:19:16.606 [2024-11-25 12:18:12.485420] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:16.606 request: 00:19:16.606 { 00:19:16.606 "base_bdev": "BaseBdev1", 00:19:16.606 "raid_bdev": "raid_bdev1", 00:19:16.606 "method": "bdev_raid_add_base_bdev", 00:19:16.606 "req_id": 1 00:19:16.606 } 00:19:16.606 Got JSON-RPC error response 00:19:16.606 response: 00:19:16.606 { 00:19:16.606 "code": -22, 00:19:16.606 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:16.606 } 00:19:16.606 12:18:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:16.606 12:18:12 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:19:16.606 12:18:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:16.606 12:18:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:16.606 12:18:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:16.606 12:18:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:17.542 12:18:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:17.542 12:18:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:17.542 12:18:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:17.542 12:18:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:17.542 12:18:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:17.542 12:18:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:17.542 12:18:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:17.542 12:18:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:17.542 12:18:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:17.542 12:18:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:17.542 12:18:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.542 12:18:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.542 12:18:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.542 12:18:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:19:17.542 12:18:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.542 12:18:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:17.542 "name": "raid_bdev1", 00:19:17.542 "uuid": "56433b9f-224f-4e8c-875e-176aa7ea907e", 00:19:17.542 "strip_size_kb": 0, 00:19:17.542 "state": "online", 00:19:17.543 "raid_level": "raid1", 00:19:17.543 "superblock": true, 00:19:17.543 "num_base_bdevs": 4, 00:19:17.543 "num_base_bdevs_discovered": 2, 00:19:17.543 "num_base_bdevs_operational": 2, 00:19:17.543 "base_bdevs_list": [ 00:19:17.543 { 00:19:17.543 "name": null, 00:19:17.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.543 "is_configured": false, 00:19:17.543 "data_offset": 0, 00:19:17.543 "data_size": 63488 00:19:17.543 }, 00:19:17.543 { 00:19:17.543 "name": null, 00:19:17.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.543 "is_configured": false, 00:19:17.543 "data_offset": 2048, 00:19:17.543 "data_size": 63488 00:19:17.543 }, 00:19:17.543 { 00:19:17.543 "name": "BaseBdev3", 00:19:17.543 "uuid": "d0ef5c48-b1dc-50b6-99ac-b9bdfcf84274", 00:19:17.543 "is_configured": true, 00:19:17.543 "data_offset": 2048, 00:19:17.543 "data_size": 63488 00:19:17.543 }, 00:19:17.543 { 00:19:17.543 "name": "BaseBdev4", 00:19:17.543 "uuid": "d624b5da-19ed-561e-b904-d72fd10442fd", 00:19:17.543 "is_configured": true, 00:19:17.543 "data_offset": 2048, 00:19:17.543 "data_size": 63488 00:19:17.543 } 00:19:17.543 ] 00:19:17.543 }' 00:19:17.543 12:18:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:17.543 12:18:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.129 12:18:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:18.129 12:18:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:18.129 12:18:14 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:18.129 12:18:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:18.129 12:18:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:18.129 12:18:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.129 12:18:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.129 12:18:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.129 12:18:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.129 12:18:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.129 12:18:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:18.129 "name": "raid_bdev1", 00:19:18.129 "uuid": "56433b9f-224f-4e8c-875e-176aa7ea907e", 00:19:18.129 "strip_size_kb": 0, 00:19:18.129 "state": "online", 00:19:18.129 "raid_level": "raid1", 00:19:18.129 "superblock": true, 00:19:18.129 "num_base_bdevs": 4, 00:19:18.129 "num_base_bdevs_discovered": 2, 00:19:18.129 "num_base_bdevs_operational": 2, 00:19:18.129 "base_bdevs_list": [ 00:19:18.129 { 00:19:18.129 "name": null, 00:19:18.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.130 "is_configured": false, 00:19:18.130 "data_offset": 0, 00:19:18.130 "data_size": 63488 00:19:18.130 }, 00:19:18.130 { 00:19:18.130 "name": null, 00:19:18.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.130 "is_configured": false, 00:19:18.130 "data_offset": 2048, 00:19:18.130 "data_size": 63488 00:19:18.130 }, 00:19:18.130 { 00:19:18.130 "name": "BaseBdev3", 00:19:18.130 "uuid": "d0ef5c48-b1dc-50b6-99ac-b9bdfcf84274", 00:19:18.130 "is_configured": true, 00:19:18.130 "data_offset": 2048, 00:19:18.130 "data_size": 63488 00:19:18.130 }, 
00:19:18.130 { 00:19:18.130 "name": "BaseBdev4", 00:19:18.130 "uuid": "d624b5da-19ed-561e-b904-d72fd10442fd", 00:19:18.130 "is_configured": true, 00:19:18.130 "data_offset": 2048, 00:19:18.130 "data_size": 63488 00:19:18.130 } 00:19:18.130 ] 00:19:18.130 }' 00:19:18.130 12:18:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:18.130 12:18:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:18.130 12:18:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:18.130 12:18:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:18.130 12:18:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78277 00:19:18.130 12:18:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78277 ']' 00:19:18.130 12:18:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78277 00:19:18.130 12:18:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:19:18.130 12:18:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:18.130 12:18:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78277 00:19:18.130 killing process with pid 78277 00:19:18.130 Received shutdown signal, test time was about 60.000000 seconds 00:19:18.130 00:19:18.130 Latency(us) 00:19:18.130 [2024-11-25T12:18:14.221Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.130 [2024-11-25T12:18:14.221Z] =================================================================================================================== 00:19:18.130 [2024-11-25T12:18:14.221Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:18.130 12:18:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:19:18.130 12:18:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:18.130 12:18:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78277' 00:19:18.130 12:18:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78277 00:19:18.130 [2024-11-25 12:18:14.209608] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:18.130 12:18:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78277 00:19:18.130 [2024-11-25 12:18:14.209762] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:18.130 [2024-11-25 12:18:14.209857] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:18.130 [2024-11-25 12:18:14.209874] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:18.695 [2024-11-25 12:18:14.651725] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:19.629 12:18:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:19:19.629 ************************************ 00:19:19.629 END TEST raid_rebuild_test_sb 00:19:19.629 ************************************ 00:19:19.629 00:19:19.629 real 0m29.296s 00:19:19.629 user 0m35.247s 00:19:19.629 sys 0m4.046s 00:19:19.629 12:18:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:19.629 12:18:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.888 12:18:15 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:19:19.888 12:18:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:19.888 12:18:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:19.888 12:18:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:19:19.888 ************************************ 00:19:19.888 START TEST raid_rebuild_test_io 00:19:19.888 ************************************ 00:19:19.888 12:18:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:19:19.888 12:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:19.888 12:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:19:19.888 12:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:19:19.888 12:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:19:19.888 12:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:19.888 12:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:19.888 12:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:19.888 12:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:19.888 12:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:19.888 12:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:19.888 12:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:19.888 12:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:19.888 12:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:19.888 12:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:19.888 12:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:19.888 12:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:19.888 12:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:19:19.888 12:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:19.888 12:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:19.888 12:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:19.888 12:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:19.888 12:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:19.888 12:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:19.888 12:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:19.888 12:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:19.888 12:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:19.888 12:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:19.888 12:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:19.888 12:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:19:19.888 12:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79076 00:19:19.888 12:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79076 00:19:19.888 12:18:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:19.888 12:18:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 79076 ']' 00:19:19.888 12:18:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.888 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:19:19.888 12:18:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:19.888 12:18:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.889 12:18:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:19.889 12:18:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:19.889 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:19.889 Zero copy mechanism will not be used. 00:19:19.889 [2024-11-25 12:18:15.873270] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:19:19.889 [2024-11-25 12:18:15.873469] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79076 ] 00:19:20.148 [2024-11-25 12:18:16.065885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.148 [2024-11-25 12:18:16.220678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.406 [2024-11-25 12:18:16.442458] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:20.406 [2024-11-25 12:18:16.442740] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:20.972 12:18:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:20.972 12:18:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:19:20.972 12:18:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:20.972 12:18:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:20.972 12:18:16 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.972 12:18:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:20.972 BaseBdev1_malloc 00:19:20.972 12:18:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.972 12:18:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:20.972 12:18:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.972 12:18:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:20.972 [2024-11-25 12:18:16.947048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:20.972 [2024-11-25 12:18:16.947129] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.972 [2024-11-25 12:18:16.947165] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:20.972 [2024-11-25 12:18:16.947183] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.972 [2024-11-25 12:18:16.950002] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.972 [2024-11-25 12:18:16.950054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:20.972 BaseBdev1 00:19:20.972 12:18:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.972 12:18:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:20.972 12:18:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:20.972 12:18:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.972 12:18:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:20.972 
BaseBdev2_malloc 00:19:20.972 12:18:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.972 12:18:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:20.972 12:18:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.972 12:18:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:20.972 [2024-11-25 12:18:16.999364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:20.972 [2024-11-25 12:18:16.999569] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.972 [2024-11-25 12:18:16.999606] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:20.972 [2024-11-25 12:18:16.999626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.972 [2024-11-25 12:18:17.002312] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.972 [2024-11-25 12:18:17.002371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:20.972 BaseBdev2 00:19:20.972 12:18:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.972 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:20.972 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:20.972 12:18:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.972 12:18:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:20.972 BaseBdev3_malloc 00:19:20.972 12:18:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.972 12:18:17 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:20.972 12:18:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.972 12:18:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:21.231 [2024-11-25 12:18:17.066675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:21.231 [2024-11-25 12:18:17.066887] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:21.232 [2024-11-25 12:18:17.067061] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:21.232 [2024-11-25 12:18:17.067193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:21.232 [2024-11-25 12:18:17.070129] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:21.232 [2024-11-25 12:18:17.070299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:21.232 BaseBdev3 00:19:21.232 12:18:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.232 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:21.232 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:21.232 12:18:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.232 12:18:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:21.232 BaseBdev4_malloc 00:19:21.232 12:18:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.232 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:19:21.232 12:18:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:21.232 12:18:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:21.232 [2024-11-25 12:18:17.123593] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:19:21.232 [2024-11-25 12:18:17.123837] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:21.232 [2024-11-25 12:18:17.123874] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:21.232 [2024-11-25 12:18:17.123891] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:21.232 [2024-11-25 12:18:17.126711] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:21.232 BaseBdev4 00:19:21.232 [2024-11-25 12:18:17.126917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:21.232 12:18:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.232 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:21.232 12:18:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.232 12:18:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:21.232 spare_malloc 00:19:21.232 12:18:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.232 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:21.232 12:18:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.232 12:18:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:21.232 spare_delay 00:19:21.232 12:18:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.232 12:18:17 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:21.232 12:18:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.232 12:18:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:21.232 [2024-11-25 12:18:17.184096] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:21.232 [2024-11-25 12:18:17.184305] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:21.232 [2024-11-25 12:18:17.184444] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:21.232 [2024-11-25 12:18:17.184580] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:21.232 [2024-11-25 12:18:17.187447] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:21.232 [2024-11-25 12:18:17.187608] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:21.232 spare 00:19:21.232 12:18:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.232 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:19:21.232 12:18:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.232 12:18:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:21.232 [2024-11-25 12:18:17.192365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:21.232 [2024-11-25 12:18:17.194882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:21.232 [2024-11-25 12:18:17.195111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:21.232 [2024-11-25 12:18:17.195303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:19:21.232 [2024-11-25 12:18:17.195541] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:21.232 [2024-11-25 12:18:17.195575] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:21.232 [2024-11-25 12:18:17.195915] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:21.232 [2024-11-25 12:18:17.196145] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:21.232 [2024-11-25 12:18:17.196165] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:21.232 [2024-11-25 12:18:17.196446] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:21.232 12:18:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.232 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:21.232 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:21.232 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:21.232 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:21.232 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:21.232 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:21.232 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:21.232 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:21.232 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:21.232 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:19:21.232 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.232 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.232 12:18:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.232 12:18:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:21.232 12:18:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.232 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:21.232 "name": "raid_bdev1", 00:19:21.232 "uuid": "0f16ede4-46e9-4c33-a1de-1aa2b09a998c", 00:19:21.232 "strip_size_kb": 0, 00:19:21.232 "state": "online", 00:19:21.232 "raid_level": "raid1", 00:19:21.232 "superblock": false, 00:19:21.232 "num_base_bdevs": 4, 00:19:21.232 "num_base_bdevs_discovered": 4, 00:19:21.232 "num_base_bdevs_operational": 4, 00:19:21.232 "base_bdevs_list": [ 00:19:21.232 { 00:19:21.232 "name": "BaseBdev1", 00:19:21.232 "uuid": "9513a66a-a274-594a-98a4-29310d3d78e1", 00:19:21.232 "is_configured": true, 00:19:21.232 "data_offset": 0, 00:19:21.232 "data_size": 65536 00:19:21.232 }, 00:19:21.232 { 00:19:21.232 "name": "BaseBdev2", 00:19:21.232 "uuid": "444ee926-3b6c-5122-84a6-5c29e791952d", 00:19:21.232 "is_configured": true, 00:19:21.232 "data_offset": 0, 00:19:21.232 "data_size": 65536 00:19:21.232 }, 00:19:21.232 { 00:19:21.232 "name": "BaseBdev3", 00:19:21.232 "uuid": "04d571eb-aa74-55ec-b8b5-0c1b3197d1e8", 00:19:21.232 "is_configured": true, 00:19:21.232 "data_offset": 0, 00:19:21.232 "data_size": 65536 00:19:21.232 }, 00:19:21.232 { 00:19:21.232 "name": "BaseBdev4", 00:19:21.232 "uuid": "4918a91a-5708-590f-bcdb-ba981e4866fe", 00:19:21.232 "is_configured": true, 00:19:21.232 "data_offset": 0, 00:19:21.232 "data_size": 65536 00:19:21.232 } 00:19:21.232 ] 00:19:21.232 }' 00:19:21.232 
12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:21.232 12:18:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:21.798 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:21.798 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:21.798 12:18:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.798 12:18:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:21.798 [2024-11-25 12:18:17.716941] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:21.798 12:18:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.798 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:19:21.798 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.798 12:18:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.798 12:18:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:21.798 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:21.798 12:18:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.798 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:19:21.798 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:19:21.798 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:21.798 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:21.798 12:18:17 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.798 12:18:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:21.798 [2024-11-25 12:18:17.820498] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:21.798 12:18:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.798 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:21.798 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:21.798 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:21.798 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:21.798 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:21.798 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:21.798 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:21.798 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:21.798 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:21.798 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:21.798 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.798 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.798 12:18:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.798 12:18:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:21.798 12:18:17 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.798 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:21.798 "name": "raid_bdev1", 00:19:21.799 "uuid": "0f16ede4-46e9-4c33-a1de-1aa2b09a998c", 00:19:21.799 "strip_size_kb": 0, 00:19:21.799 "state": "online", 00:19:21.799 "raid_level": "raid1", 00:19:21.799 "superblock": false, 00:19:21.799 "num_base_bdevs": 4, 00:19:21.799 "num_base_bdevs_discovered": 3, 00:19:21.799 "num_base_bdevs_operational": 3, 00:19:21.799 "base_bdevs_list": [ 00:19:21.799 { 00:19:21.799 "name": null, 00:19:21.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.799 "is_configured": false, 00:19:21.799 "data_offset": 0, 00:19:21.799 "data_size": 65536 00:19:21.799 }, 00:19:21.799 { 00:19:21.799 "name": "BaseBdev2", 00:19:21.799 "uuid": "444ee926-3b6c-5122-84a6-5c29e791952d", 00:19:21.799 "is_configured": true, 00:19:21.799 "data_offset": 0, 00:19:21.799 "data_size": 65536 00:19:21.799 }, 00:19:21.799 { 00:19:21.799 "name": "BaseBdev3", 00:19:21.799 "uuid": "04d571eb-aa74-55ec-b8b5-0c1b3197d1e8", 00:19:21.799 "is_configured": true, 00:19:21.799 "data_offset": 0, 00:19:21.799 "data_size": 65536 00:19:21.799 }, 00:19:21.799 { 00:19:21.799 "name": "BaseBdev4", 00:19:21.799 "uuid": "4918a91a-5708-590f-bcdb-ba981e4866fe", 00:19:21.799 "is_configured": true, 00:19:21.799 "data_offset": 0, 00:19:21.799 "data_size": 65536 00:19:21.799 } 00:19:21.799 ] 00:19:21.799 }' 00:19:21.799 12:18:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:21.799 12:18:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:22.057 [2024-11-25 12:18:17.976597] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:22.057 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:22.057 Zero copy mechanism will not be used. 00:19:22.057 Running I/O for 60 seconds... 
00:19:22.338 12:18:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:22.338 12:18:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.338 12:18:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:22.338 [2024-11-25 12:18:18.326766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:22.338 12:18:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.338 12:18:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:22.338 [2024-11-25 12:18:18.382655] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:19:22.338 [2024-11-25 12:18:18.385187] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:22.603 [2024-11-25 12:18:18.497412] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:22.603 [2024-11-25 12:18:18.498991] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:22.863 [2024-11-25 12:18:18.760452] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:22.863 [2024-11-25 12:18:18.761033] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:23.123 141.00 IOPS, 423.00 MiB/s [2024-11-25T12:18:19.214Z] [2024-11-25 12:18:19.127544] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:23.381 [2024-11-25 12:18:19.249961] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:23.381 [2024-11-25 12:18:19.250563] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:23.381 12:18:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:23.381 12:18:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:23.381 12:18:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:23.381 12:18:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:23.381 12:18:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:23.381 12:18:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.381 12:18:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.381 12:18:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.381 12:18:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:23.381 12:18:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.381 12:18:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:23.381 "name": "raid_bdev1", 00:19:23.381 "uuid": "0f16ede4-46e9-4c33-a1de-1aa2b09a998c", 00:19:23.381 "strip_size_kb": 0, 00:19:23.381 "state": "online", 00:19:23.381 "raid_level": "raid1", 00:19:23.381 "superblock": false, 00:19:23.381 "num_base_bdevs": 4, 00:19:23.381 "num_base_bdevs_discovered": 4, 00:19:23.381 "num_base_bdevs_operational": 4, 00:19:23.381 "process": { 00:19:23.381 "type": "rebuild", 00:19:23.381 "target": "spare", 00:19:23.381 "progress": { 00:19:23.381 "blocks": 10240, 00:19:23.381 "percent": 15 00:19:23.381 } 00:19:23.381 }, 00:19:23.381 "base_bdevs_list": [ 00:19:23.381 { 00:19:23.381 "name": "spare", 00:19:23.381 "uuid": 
"79ddca21-9dae-5a11-b59a-a706dd20664f", 00:19:23.381 "is_configured": true, 00:19:23.381 "data_offset": 0, 00:19:23.381 "data_size": 65536 00:19:23.381 }, 00:19:23.381 { 00:19:23.381 "name": "BaseBdev2", 00:19:23.381 "uuid": "444ee926-3b6c-5122-84a6-5c29e791952d", 00:19:23.381 "is_configured": true, 00:19:23.381 "data_offset": 0, 00:19:23.381 "data_size": 65536 00:19:23.381 }, 00:19:23.381 { 00:19:23.381 "name": "BaseBdev3", 00:19:23.381 "uuid": "04d571eb-aa74-55ec-b8b5-0c1b3197d1e8", 00:19:23.381 "is_configured": true, 00:19:23.381 "data_offset": 0, 00:19:23.381 "data_size": 65536 00:19:23.381 }, 00:19:23.381 { 00:19:23.381 "name": "BaseBdev4", 00:19:23.381 "uuid": "4918a91a-5708-590f-bcdb-ba981e4866fe", 00:19:23.381 "is_configured": true, 00:19:23.381 "data_offset": 0, 00:19:23.381 "data_size": 65536 00:19:23.381 } 00:19:23.381 ] 00:19:23.381 }' 00:19:23.381 12:18:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:23.639 12:18:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:23.639 12:18:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:23.639 12:18:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:23.639 12:18:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:23.639 12:18:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.640 12:18:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:23.640 [2024-11-25 12:18:19.545651] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:23.640 [2024-11-25 12:18:19.708284] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:23.898 [2024-11-25 12:18:19.730137] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:19:23.898 [2024-11-25 12:18:19.730242] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:23.898 [2024-11-25 12:18:19.730263] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:23.898 [2024-11-25 12:18:19.770291] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:19:23.898 12:18:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.898 12:18:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:23.898 12:18:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:23.898 12:18:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:23.898 12:18:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:23.898 12:18:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:23.898 12:18:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:23.898 12:18:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.898 12:18:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.898 12:18:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:23.898 12:18:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.898 12:18:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.898 12:18:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.898 12:18:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.898 12:18:19 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:23.898 12:18:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.899 12:18:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:23.899 "name": "raid_bdev1", 00:19:23.899 "uuid": "0f16ede4-46e9-4c33-a1de-1aa2b09a998c", 00:19:23.899 "strip_size_kb": 0, 00:19:23.899 "state": "online", 00:19:23.899 "raid_level": "raid1", 00:19:23.899 "superblock": false, 00:19:23.899 "num_base_bdevs": 4, 00:19:23.899 "num_base_bdevs_discovered": 3, 00:19:23.899 "num_base_bdevs_operational": 3, 00:19:23.899 "base_bdevs_list": [ 00:19:23.899 { 00:19:23.899 "name": null, 00:19:23.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.899 "is_configured": false, 00:19:23.899 "data_offset": 0, 00:19:23.899 "data_size": 65536 00:19:23.899 }, 00:19:23.899 { 00:19:23.899 "name": "BaseBdev2", 00:19:23.899 "uuid": "444ee926-3b6c-5122-84a6-5c29e791952d", 00:19:23.899 "is_configured": true, 00:19:23.899 "data_offset": 0, 00:19:23.899 "data_size": 65536 00:19:23.899 }, 00:19:23.899 { 00:19:23.899 "name": "BaseBdev3", 00:19:23.899 "uuid": "04d571eb-aa74-55ec-b8b5-0c1b3197d1e8", 00:19:23.899 "is_configured": true, 00:19:23.899 "data_offset": 0, 00:19:23.899 "data_size": 65536 00:19:23.899 }, 00:19:23.899 { 00:19:23.899 "name": "BaseBdev4", 00:19:23.899 "uuid": "4918a91a-5708-590f-bcdb-ba981e4866fe", 00:19:23.899 "is_configured": true, 00:19:23.899 "data_offset": 0, 00:19:23.899 "data_size": 65536 00:19:23.899 } 00:19:23.899 ] 00:19:23.899 }' 00:19:23.899 12:18:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:23.899 12:18:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:24.415 120.00 IOPS, 360.00 MiB/s [2024-11-25T12:18:20.506Z] 12:18:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:24.415 12:18:20 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:24.416 12:18:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:24.416 12:18:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:24.416 12:18:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:24.416 12:18:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.416 12:18:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.416 12:18:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.416 12:18:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:24.416 12:18:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.416 12:18:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:24.416 "name": "raid_bdev1", 00:19:24.416 "uuid": "0f16ede4-46e9-4c33-a1de-1aa2b09a998c", 00:19:24.416 "strip_size_kb": 0, 00:19:24.416 "state": "online", 00:19:24.416 "raid_level": "raid1", 00:19:24.416 "superblock": false, 00:19:24.416 "num_base_bdevs": 4, 00:19:24.416 "num_base_bdevs_discovered": 3, 00:19:24.416 "num_base_bdevs_operational": 3, 00:19:24.416 "base_bdevs_list": [ 00:19:24.416 { 00:19:24.416 "name": null, 00:19:24.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.416 "is_configured": false, 00:19:24.416 "data_offset": 0, 00:19:24.416 "data_size": 65536 00:19:24.416 }, 00:19:24.416 { 00:19:24.416 "name": "BaseBdev2", 00:19:24.416 "uuid": "444ee926-3b6c-5122-84a6-5c29e791952d", 00:19:24.416 "is_configured": true, 00:19:24.416 "data_offset": 0, 00:19:24.416 "data_size": 65536 00:19:24.416 }, 00:19:24.416 { 00:19:24.416 "name": "BaseBdev3", 00:19:24.416 "uuid": "04d571eb-aa74-55ec-b8b5-0c1b3197d1e8", 
00:19:24.416 "is_configured": true, 00:19:24.416 "data_offset": 0, 00:19:24.416 "data_size": 65536 00:19:24.416 }, 00:19:24.416 { 00:19:24.416 "name": "BaseBdev4", 00:19:24.416 "uuid": "4918a91a-5708-590f-bcdb-ba981e4866fe", 00:19:24.416 "is_configured": true, 00:19:24.416 "data_offset": 0, 00:19:24.416 "data_size": 65536 00:19:24.416 } 00:19:24.416 ] 00:19:24.416 }' 00:19:24.416 12:18:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:24.416 12:18:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:24.416 12:18:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:24.416 12:18:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:24.416 12:18:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:24.416 12:18:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.416 12:18:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:24.416 [2024-11-25 12:18:20.458960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:24.673 12:18:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.673 12:18:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:24.673 [2024-11-25 12:18:20.546157] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:24.673 [2024-11-25 12:18:20.548872] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:24.673 [2024-11-25 12:18:20.667710] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:24.673 [2024-11-25 12:18:20.669423] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:24.931 [2024-11-25 12:18:20.873546] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:24.931 [2024-11-25 12:18:20.873940] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:25.188 117.67 IOPS, 353.00 MiB/s [2024-11-25T12:18:21.279Z] [2024-11-25 12:18:21.098686] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:25.188 [2024-11-25 12:18:21.099356] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:25.446 [2024-11-25 12:18:21.313229] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:25.447 [2024-11-25 12:18:21.314129] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:25.447 12:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:25.447 12:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:25.447 12:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:25.447 12:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:25.447 12:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:25.447 12:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.447 12:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.447 12:18:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.447 
12:18:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:25.703 12:18:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.703 12:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:25.703 "name": "raid_bdev1", 00:19:25.703 "uuid": "0f16ede4-46e9-4c33-a1de-1aa2b09a998c", 00:19:25.703 "strip_size_kb": 0, 00:19:25.703 "state": "online", 00:19:25.703 "raid_level": "raid1", 00:19:25.703 "superblock": false, 00:19:25.703 "num_base_bdevs": 4, 00:19:25.703 "num_base_bdevs_discovered": 4, 00:19:25.703 "num_base_bdevs_operational": 4, 00:19:25.703 "process": { 00:19:25.703 "type": "rebuild", 00:19:25.703 "target": "spare", 00:19:25.703 "progress": { 00:19:25.703 "blocks": 10240, 00:19:25.703 "percent": 15 00:19:25.703 } 00:19:25.703 }, 00:19:25.703 "base_bdevs_list": [ 00:19:25.703 { 00:19:25.703 "name": "spare", 00:19:25.703 "uuid": "79ddca21-9dae-5a11-b59a-a706dd20664f", 00:19:25.703 "is_configured": true, 00:19:25.703 "data_offset": 0, 00:19:25.703 "data_size": 65536 00:19:25.703 }, 00:19:25.703 { 00:19:25.703 "name": "BaseBdev2", 00:19:25.703 "uuid": "444ee926-3b6c-5122-84a6-5c29e791952d", 00:19:25.703 "is_configured": true, 00:19:25.703 "data_offset": 0, 00:19:25.703 "data_size": 65536 00:19:25.703 }, 00:19:25.703 { 00:19:25.703 "name": "BaseBdev3", 00:19:25.703 "uuid": "04d571eb-aa74-55ec-b8b5-0c1b3197d1e8", 00:19:25.703 "is_configured": true, 00:19:25.703 "data_offset": 0, 00:19:25.703 "data_size": 65536 00:19:25.703 }, 00:19:25.703 { 00:19:25.703 "name": "BaseBdev4", 00:19:25.703 "uuid": "4918a91a-5708-590f-bcdb-ba981e4866fe", 00:19:25.703 "is_configured": true, 00:19:25.703 "data_offset": 0, 00:19:25.703 "data_size": 65536 00:19:25.703 } 00:19:25.703 ] 00:19:25.703 }' 00:19:25.703 12:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:25.703 12:18:21 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:25.703 12:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:25.703 12:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:25.703 12:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:19:25.703 12:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:19:25.703 12:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:25.703 12:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:19:25.703 12:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:25.703 12:18:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.703 12:18:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:25.703 [2024-11-25 12:18:21.697143] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:25.961 [2024-11-25 12:18:21.826555] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:25.961 [2024-11-25 12:18:21.862636] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:19:25.961 [2024-11-25 12:18:21.862679] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:19:25.961 12:18:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.961 12:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:19:25.961 12:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:19:25.961 12:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:25.961 12:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:25.961 12:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:25.961 12:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:25.961 12:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:25.961 12:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.961 12:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.961 12:18:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.961 12:18:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:25.961 12:18:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.961 12:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:25.961 "name": "raid_bdev1", 00:19:25.961 "uuid": "0f16ede4-46e9-4c33-a1de-1aa2b09a998c", 00:19:25.961 "strip_size_kb": 0, 00:19:25.961 "state": "online", 00:19:25.961 "raid_level": "raid1", 00:19:25.961 "superblock": false, 00:19:25.961 "num_base_bdevs": 4, 00:19:25.961 "num_base_bdevs_discovered": 3, 00:19:25.961 "num_base_bdevs_operational": 3, 00:19:25.961 "process": { 00:19:25.961 "type": "rebuild", 00:19:25.961 "target": "spare", 00:19:25.961 "progress": { 00:19:25.961 "blocks": 16384, 00:19:25.961 "percent": 25 00:19:25.961 } 00:19:25.961 }, 00:19:25.961 "base_bdevs_list": [ 00:19:25.961 { 00:19:25.961 "name": "spare", 00:19:25.961 "uuid": "79ddca21-9dae-5a11-b59a-a706dd20664f", 00:19:25.961 "is_configured": true, 00:19:25.962 "data_offset": 0, 00:19:25.962 "data_size": 65536 00:19:25.962 }, 00:19:25.962 { 00:19:25.962 "name": null, 
00:19:25.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.962 "is_configured": false, 00:19:25.962 "data_offset": 0, 00:19:25.962 "data_size": 65536 00:19:25.962 }, 00:19:25.962 { 00:19:25.962 "name": "BaseBdev3", 00:19:25.962 "uuid": "04d571eb-aa74-55ec-b8b5-0c1b3197d1e8", 00:19:25.962 "is_configured": true, 00:19:25.962 "data_offset": 0, 00:19:25.962 "data_size": 65536 00:19:25.962 }, 00:19:25.962 { 00:19:25.962 "name": "BaseBdev4", 00:19:25.962 "uuid": "4918a91a-5708-590f-bcdb-ba981e4866fe", 00:19:25.962 "is_configured": true, 00:19:25.962 "data_offset": 0, 00:19:25.962 "data_size": 65536 00:19:25.962 } 00:19:25.962 ] 00:19:25.962 }' 00:19:25.962 12:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:25.962 12:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:25.962 12:18:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:25.962 105.00 IOPS, 315.00 MiB/s [2024-11-25T12:18:22.053Z] 12:18:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:25.962 12:18:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=521 00:19:25.962 12:18:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:25.962 12:18:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:25.962 12:18:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:25.962 12:18:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:25.962 12:18:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:25.962 12:18:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:25.962 12:18:22 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.962 12:18:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.962 12:18:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:25.962 12:18:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.219 12:18:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.219 12:18:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:26.219 "name": "raid_bdev1", 00:19:26.219 "uuid": "0f16ede4-46e9-4c33-a1de-1aa2b09a998c", 00:19:26.219 "strip_size_kb": 0, 00:19:26.219 "state": "online", 00:19:26.219 "raid_level": "raid1", 00:19:26.219 "superblock": false, 00:19:26.219 "num_base_bdevs": 4, 00:19:26.219 "num_base_bdevs_discovered": 3, 00:19:26.219 "num_base_bdevs_operational": 3, 00:19:26.219 "process": { 00:19:26.219 "type": "rebuild", 00:19:26.219 "target": "spare", 00:19:26.219 "progress": { 00:19:26.219 "blocks": 18432, 00:19:26.219 "percent": 28 00:19:26.219 } 00:19:26.219 }, 00:19:26.219 "base_bdevs_list": [ 00:19:26.219 { 00:19:26.219 "name": "spare", 00:19:26.219 "uuid": "79ddca21-9dae-5a11-b59a-a706dd20664f", 00:19:26.219 "is_configured": true, 00:19:26.219 "data_offset": 0, 00:19:26.219 "data_size": 65536 00:19:26.219 }, 00:19:26.219 { 00:19:26.219 "name": null, 00:19:26.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.219 "is_configured": false, 00:19:26.219 "data_offset": 0, 00:19:26.219 "data_size": 65536 00:19:26.219 }, 00:19:26.219 { 00:19:26.219 "name": "BaseBdev3", 00:19:26.219 "uuid": "04d571eb-aa74-55ec-b8b5-0c1b3197d1e8", 00:19:26.219 "is_configured": true, 00:19:26.219 "data_offset": 0, 00:19:26.219 "data_size": 65536 00:19:26.219 }, 00:19:26.219 { 00:19:26.219 "name": "BaseBdev4", 00:19:26.219 "uuid": "4918a91a-5708-590f-bcdb-ba981e4866fe", 00:19:26.219 "is_configured": true, 
00:19:26.219 "data_offset": 0, 00:19:26.219 "data_size": 65536 00:19:26.219 } 00:19:26.219 ] 00:19:26.219 }' 00:19:26.219 12:18:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:26.219 12:18:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:26.219 [2024-11-25 12:18:22.142726] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:19:26.219 12:18:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:26.219 12:18:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:26.219 12:18:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:26.478 [2024-11-25 12:18:22.371806] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:19:26.478 [2024-11-25 12:18:22.372468] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:19:26.811 [2024-11-25 12:18:22.699572] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:19:27.089 [2024-11-25 12:18:22.932793] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:19:27.089 96.20 IOPS, 288.60 MiB/s [2024-11-25T12:18:23.180Z] [2024-11-25 12:18:23.164068] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:19:27.348 12:18:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:27.348 12:18:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:27.348 12:18:23 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:27.348 12:18:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:27.348 12:18:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:27.348 12:18:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:27.348 12:18:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.348 12:18:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.348 12:18:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.348 12:18:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:27.348 12:18:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.348 12:18:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:27.348 "name": "raid_bdev1", 00:19:27.348 "uuid": "0f16ede4-46e9-4c33-a1de-1aa2b09a998c", 00:19:27.348 "strip_size_kb": 0, 00:19:27.348 "state": "online", 00:19:27.348 "raid_level": "raid1", 00:19:27.348 "superblock": false, 00:19:27.348 "num_base_bdevs": 4, 00:19:27.348 "num_base_bdevs_discovered": 3, 00:19:27.348 "num_base_bdevs_operational": 3, 00:19:27.348 "process": { 00:19:27.348 "type": "rebuild", 00:19:27.348 "target": "spare", 00:19:27.348 "progress": { 00:19:27.348 "blocks": 32768, 00:19:27.348 "percent": 50 00:19:27.348 } 00:19:27.348 }, 00:19:27.348 "base_bdevs_list": [ 00:19:27.348 { 00:19:27.348 "name": "spare", 00:19:27.348 "uuid": "79ddca21-9dae-5a11-b59a-a706dd20664f", 00:19:27.348 "is_configured": true, 00:19:27.348 "data_offset": 0, 00:19:27.348 "data_size": 65536 00:19:27.348 }, 00:19:27.348 { 00:19:27.348 "name": null, 00:19:27.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.348 "is_configured": false, 00:19:27.348 
"data_offset": 0, 00:19:27.348 "data_size": 65536 00:19:27.348 }, 00:19:27.348 { 00:19:27.348 "name": "BaseBdev3", 00:19:27.348 "uuid": "04d571eb-aa74-55ec-b8b5-0c1b3197d1e8", 00:19:27.348 "is_configured": true, 00:19:27.348 "data_offset": 0, 00:19:27.348 "data_size": 65536 00:19:27.348 }, 00:19:27.348 { 00:19:27.348 "name": "BaseBdev4", 00:19:27.348 "uuid": "4918a91a-5708-590f-bcdb-ba981e4866fe", 00:19:27.348 "is_configured": true, 00:19:27.348 "data_offset": 0, 00:19:27.348 "data_size": 65536 00:19:27.348 } 00:19:27.348 ] 00:19:27.348 }' 00:19:27.348 12:18:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:27.348 12:18:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:27.348 12:18:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:27.348 12:18:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:27.348 12:18:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:27.914 [2024-11-25 12:18:23.938283] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:19:28.431 91.50 IOPS, 274.50 MiB/s [2024-11-25T12:18:24.522Z] 12:18:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:28.431 12:18:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:28.431 12:18:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:28.431 12:18:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:28.431 12:18:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:28.431 12:18:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:28.431 
12:18:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.431 12:18:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.431 12:18:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.431 12:18:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:28.431 12:18:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.431 12:18:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:28.431 "name": "raid_bdev1", 00:19:28.431 "uuid": "0f16ede4-46e9-4c33-a1de-1aa2b09a998c", 00:19:28.431 "strip_size_kb": 0, 00:19:28.431 "state": "online", 00:19:28.431 "raid_level": "raid1", 00:19:28.431 "superblock": false, 00:19:28.431 "num_base_bdevs": 4, 00:19:28.431 "num_base_bdevs_discovered": 3, 00:19:28.431 "num_base_bdevs_operational": 3, 00:19:28.431 "process": { 00:19:28.431 "type": "rebuild", 00:19:28.431 "target": "spare", 00:19:28.432 "progress": { 00:19:28.432 "blocks": 51200, 00:19:28.432 "percent": 78 00:19:28.432 } 00:19:28.432 }, 00:19:28.432 "base_bdevs_list": [ 00:19:28.432 { 00:19:28.432 "name": "spare", 00:19:28.432 "uuid": "79ddca21-9dae-5a11-b59a-a706dd20664f", 00:19:28.432 "is_configured": true, 00:19:28.432 "data_offset": 0, 00:19:28.432 "data_size": 65536 00:19:28.432 }, 00:19:28.432 { 00:19:28.432 "name": null, 00:19:28.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.432 "is_configured": false, 00:19:28.432 "data_offset": 0, 00:19:28.432 "data_size": 65536 00:19:28.432 }, 00:19:28.432 { 00:19:28.432 "name": "BaseBdev3", 00:19:28.432 "uuid": "04d571eb-aa74-55ec-b8b5-0c1b3197d1e8", 00:19:28.432 "is_configured": true, 00:19:28.432 "data_offset": 0, 00:19:28.432 "data_size": 65536 00:19:28.432 }, 00:19:28.432 { 00:19:28.432 "name": "BaseBdev4", 00:19:28.432 "uuid": 
"4918a91a-5708-590f-bcdb-ba981e4866fe", 00:19:28.432 "is_configured": true, 00:19:28.432 "data_offset": 0, 00:19:28.432 "data_size": 65536 00:19:28.432 } 00:19:28.432 ] 00:19:28.432 }' 00:19:28.432 12:18:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:28.432 12:18:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:28.432 12:18:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:28.432 12:18:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:28.432 12:18:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:28.999 83.71 IOPS, 251.14 MiB/s [2024-11-25T12:18:25.090Z] [2024-11-25 12:18:25.043437] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:29.257 [2024-11-25 12:18:25.151272] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:29.257 [2024-11-25 12:18:25.156330] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:29.516 12:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:29.516 12:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:29.516 12:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:29.516 12:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:29.516 12:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:29.516 12:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:29.516 12:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.516 12:18:25 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.516 12:18:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.516 12:18:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:29.516 12:18:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.516 12:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:29.516 "name": "raid_bdev1", 00:19:29.516 "uuid": "0f16ede4-46e9-4c33-a1de-1aa2b09a998c", 00:19:29.516 "strip_size_kb": 0, 00:19:29.516 "state": "online", 00:19:29.516 "raid_level": "raid1", 00:19:29.516 "superblock": false, 00:19:29.516 "num_base_bdevs": 4, 00:19:29.516 "num_base_bdevs_discovered": 3, 00:19:29.516 "num_base_bdevs_operational": 3, 00:19:29.516 "base_bdevs_list": [ 00:19:29.516 { 00:19:29.516 "name": "spare", 00:19:29.516 "uuid": "79ddca21-9dae-5a11-b59a-a706dd20664f", 00:19:29.516 "is_configured": true, 00:19:29.516 "data_offset": 0, 00:19:29.516 "data_size": 65536 00:19:29.516 }, 00:19:29.516 { 00:19:29.516 "name": null, 00:19:29.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.516 "is_configured": false, 00:19:29.516 "data_offset": 0, 00:19:29.516 "data_size": 65536 00:19:29.516 }, 00:19:29.516 { 00:19:29.516 "name": "BaseBdev3", 00:19:29.516 "uuid": "04d571eb-aa74-55ec-b8b5-0c1b3197d1e8", 00:19:29.516 "is_configured": true, 00:19:29.516 "data_offset": 0, 00:19:29.516 "data_size": 65536 00:19:29.516 }, 00:19:29.516 { 00:19:29.516 "name": "BaseBdev4", 00:19:29.516 "uuid": "4918a91a-5708-590f-bcdb-ba981e4866fe", 00:19:29.516 "is_configured": true, 00:19:29.516 "data_offset": 0, 00:19:29.516 "data_size": 65536 00:19:29.516 } 00:19:29.516 ] 00:19:29.516 }' 00:19:29.516 12:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:29.776 12:18:25 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:29.776 12:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:29.776 12:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:29.776 12:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:19:29.776 12:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:29.776 12:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:29.776 12:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:29.776 12:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:29.776 12:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:29.776 12:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.776 12:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.776 12:18:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.776 12:18:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:29.776 12:18:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.776 12:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:29.776 "name": "raid_bdev1", 00:19:29.776 "uuid": "0f16ede4-46e9-4c33-a1de-1aa2b09a998c", 00:19:29.776 "strip_size_kb": 0, 00:19:29.776 "state": "online", 00:19:29.776 "raid_level": "raid1", 00:19:29.776 "superblock": false, 00:19:29.776 "num_base_bdevs": 4, 00:19:29.776 "num_base_bdevs_discovered": 3, 00:19:29.776 "num_base_bdevs_operational": 3, 00:19:29.776 "base_bdevs_list": [ 00:19:29.776 { 00:19:29.776 
"name": "spare", 00:19:29.776 "uuid": "79ddca21-9dae-5a11-b59a-a706dd20664f", 00:19:29.776 "is_configured": true, 00:19:29.776 "data_offset": 0, 00:19:29.776 "data_size": 65536 00:19:29.776 }, 00:19:29.776 { 00:19:29.776 "name": null, 00:19:29.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.776 "is_configured": false, 00:19:29.776 "data_offset": 0, 00:19:29.776 "data_size": 65536 00:19:29.776 }, 00:19:29.776 { 00:19:29.776 "name": "BaseBdev3", 00:19:29.776 "uuid": "04d571eb-aa74-55ec-b8b5-0c1b3197d1e8", 00:19:29.776 "is_configured": true, 00:19:29.776 "data_offset": 0, 00:19:29.776 "data_size": 65536 00:19:29.776 }, 00:19:29.776 { 00:19:29.776 "name": "BaseBdev4", 00:19:29.776 "uuid": "4918a91a-5708-590f-bcdb-ba981e4866fe", 00:19:29.776 "is_configured": true, 00:19:29.776 "data_offset": 0, 00:19:29.776 "data_size": 65536 00:19:29.776 } 00:19:29.776 ] 00:19:29.776 }' 00:19:29.776 12:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:29.776 12:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:29.776 12:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:29.776 12:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:29.776 12:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:29.776 12:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:29.776 12:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:29.776 12:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:29.776 12:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:29.776 12:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:19:29.776 12:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:29.776 12:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:29.776 12:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.776 12:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:29.776 12:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.776 12:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.776 12:18:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.776 12:18:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:30.035 12:18:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.035 12:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:30.035 "name": "raid_bdev1", 00:19:30.035 "uuid": "0f16ede4-46e9-4c33-a1de-1aa2b09a998c", 00:19:30.035 "strip_size_kb": 0, 00:19:30.035 "state": "online", 00:19:30.035 "raid_level": "raid1", 00:19:30.035 "superblock": false, 00:19:30.035 "num_base_bdevs": 4, 00:19:30.035 "num_base_bdevs_discovered": 3, 00:19:30.035 "num_base_bdevs_operational": 3, 00:19:30.035 "base_bdevs_list": [ 00:19:30.035 { 00:19:30.035 "name": "spare", 00:19:30.035 "uuid": "79ddca21-9dae-5a11-b59a-a706dd20664f", 00:19:30.035 "is_configured": true, 00:19:30.035 "data_offset": 0, 00:19:30.035 "data_size": 65536 00:19:30.035 }, 00:19:30.035 { 00:19:30.035 "name": null, 00:19:30.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.035 "is_configured": false, 00:19:30.035 "data_offset": 0, 00:19:30.035 "data_size": 65536 00:19:30.035 }, 00:19:30.035 { 00:19:30.035 "name": "BaseBdev3", 00:19:30.035 "uuid": 
"04d571eb-aa74-55ec-b8b5-0c1b3197d1e8", 00:19:30.035 "is_configured": true, 00:19:30.035 "data_offset": 0, 00:19:30.035 "data_size": 65536 00:19:30.035 }, 00:19:30.035 { 00:19:30.035 "name": "BaseBdev4", 00:19:30.035 "uuid": "4918a91a-5708-590f-bcdb-ba981e4866fe", 00:19:30.035 "is_configured": true, 00:19:30.035 "data_offset": 0, 00:19:30.035 "data_size": 65536 00:19:30.035 } 00:19:30.035 ] 00:19:30.035 }' 00:19:30.035 12:18:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:30.035 12:18:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:30.294 77.62 IOPS, 232.88 MiB/s [2024-11-25T12:18:26.385Z] 12:18:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:30.294 12:18:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.294 12:18:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:30.294 [2024-11-25 12:18:26.348771] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:30.294 [2024-11-25 12:18:26.348940] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:30.294 00:19:30.294 Latency(us) 00:19:30.294 [2024-11-25T12:18:26.385Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.294 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:19:30.294 raid_bdev1 : 8.39 75.77 227.32 0.00 0.00 17342.32 290.44 122969.37 00:19:30.294 [2024-11-25T12:18:26.385Z] =================================================================================================================== 00:19:30.294 [2024-11-25T12:18:26.385Z] Total : 75.77 227.32 0.00 0.00 17342.32 290.44 122969.37 00:19:30.554 [2024-11-25 12:18:26.392863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:30.554 [2024-11-25 12:18:26.393067] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:30.554 [2024-11-25 12:18:26.393261] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:30.554 [2024-11-25 12:18:26.393511] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:30.554 { 00:19:30.554 "results": [ 00:19:30.554 { 00:19:30.554 "job": "raid_bdev1", 00:19:30.554 "core_mask": "0x1", 00:19:30.554 "workload": "randrw", 00:19:30.554 "percentage": 50, 00:19:30.554 "status": "finished", 00:19:30.554 "queue_depth": 2, 00:19:30.554 "io_size": 3145728, 00:19:30.554 "runtime": 8.393362, 00:19:30.554 "iops": 75.77416534637729, 00:19:30.554 "mibps": 227.32249603913186, 00:19:30.554 "io_failed": 0, 00:19:30.554 "io_timeout": 0, 00:19:30.554 "avg_latency_us": 17342.324070897652, 00:19:30.554 "min_latency_us": 290.44363636363636, 00:19:30.554 "max_latency_us": 122969.36727272728 00:19:30.554 } 00:19:30.554 ], 00:19:30.554 "core_count": 1 00:19:30.554 } 00:19:30.554 12:18:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.554 12:18:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.554 12:18:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.554 12:18:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:30.554 12:18:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:19:30.554 12:18:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.554 12:18:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:30.554 12:18:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:30.554 12:18:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:19:30.554 12:18:26 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:19:30.554 12:18:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:30.554 12:18:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:19:30.554 12:18:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:30.554 12:18:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:30.554 12:18:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:30.554 12:18:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:19:30.554 12:18:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:30.554 12:18:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:30.554 12:18:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:19:30.892 /dev/nbd0 00:19:30.892 12:18:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:30.892 12:18:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:30.892 12:18:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:30.892 12:18:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:19:30.892 12:18:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:30.892 12:18:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:30.892 12:18:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:30.892 12:18:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:19:30.892 12:18:26 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:30.892 12:18:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:30.892 12:18:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:30.892 1+0 records in 00:19:30.892 1+0 records out 00:19:30.892 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343886 s, 11.9 MB/s 00:19:30.892 12:18:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:30.892 12:18:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:19:30.892 12:18:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:30.892 12:18:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:30.892 12:18:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:19:30.892 12:18:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:30.892 12:18:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:30.892 12:18:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:19:30.892 12:18:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:19:30.892 12:18:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:19:30.892 12:18:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:19:30.892 12:18:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:19:30.892 12:18:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:19:30.892 12:18:26 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:30.892 12:18:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:19:30.892 12:18:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:30.892 12:18:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:19:30.892 12:18:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:30.892 12:18:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:19:30.892 12:18:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:30.892 12:18:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:30.893 12:18:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:19:31.151 /dev/nbd1 00:19:31.151 12:18:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:31.151 12:18:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:31.151 12:18:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:31.151 12:18:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:19:31.151 12:18:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:31.151 12:18:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:31.151 12:18:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:31.151 12:18:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:19:31.151 12:18:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:31.151 12:18:27 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:31.151 12:18:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:31.151 1+0 records in 00:19:31.151 1+0 records out 00:19:31.151 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000443538 s, 9.2 MB/s 00:19:31.151 12:18:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:31.151 12:18:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:19:31.151 12:18:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:31.151 12:18:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:31.151 12:18:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:19:31.151 12:18:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:31.151 12:18:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:31.151 12:18:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:31.151 12:18:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:19:31.151 12:18:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:31.151 12:18:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:19:31.151 12:18:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:31.151 12:18:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:19:31.151 12:18:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:31.151 12:18:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:31.718 12:18:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:31.718 12:18:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:31.718 12:18:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:31.718 12:18:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:31.718 12:18:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:31.718 12:18:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:31.718 12:18:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:19:31.718 12:18:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:31.718 12:18:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:19:31.718 12:18:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:19:31.718 12:18:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:19:31.718 12:18:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:31.718 12:18:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:19:31.718 12:18:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:31.718 12:18:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:19:31.718 12:18:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:31.718 12:18:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:19:31.718 12:18:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:31.719 12:18:27 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:31.719 12:18:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:19:31.977 /dev/nbd1 00:19:31.977 12:18:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:31.977 12:18:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:31.977 12:18:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:31.977 12:18:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:19:31.977 12:18:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:31.977 12:18:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:31.977 12:18:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:31.977 12:18:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:19:31.977 12:18:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:31.977 12:18:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:31.977 12:18:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:31.977 1+0 records in 00:19:31.978 1+0 records out 00:19:31.978 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270798 s, 15.1 MB/s 00:19:31.978 12:18:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:31.978 12:18:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:19:31.978 12:18:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:31.978 12:18:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:31.978 12:18:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:19:31.978 12:18:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:31.978 12:18:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:31.978 12:18:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:31.978 12:18:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:19:31.978 12:18:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:31.978 12:18:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:19:31.978 12:18:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:31.978 12:18:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:19:31.978 12:18:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:31.978 12:18:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:32.236 12:18:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:32.494 12:18:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:32.494 12:18:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:32.494 12:18:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:32.494 12:18:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:32.494 12:18:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 
00:19:32.494 12:18:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:19:32.494 12:18:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:32.494 12:18:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:32.494 12:18:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:32.494 12:18:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:32.494 12:18:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:32.494 12:18:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:19:32.494 12:18:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:32.494 12:18:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:32.752 12:18:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:32.752 12:18:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:32.752 12:18:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:32.752 12:18:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:32.752 12:18:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:32.752 12:18:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:32.752 12:18:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:19:32.752 12:18:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:32.752 12:18:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:19:32.752 12:18:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # 
killprocess 79076 00:19:32.752 12:18:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 79076 ']' 00:19:32.752 12:18:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 79076 00:19:32.752 12:18:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:19:32.752 12:18:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:32.752 12:18:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79076 00:19:32.752 killing process with pid 79076 00:19:32.752 Received shutdown signal, test time was about 10.695356 seconds 00:19:32.752 00:19:32.752 Latency(us) 00:19:32.752 [2024-11-25T12:18:28.843Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.752 [2024-11-25T12:18:28.843Z] =================================================================================================================== 00:19:32.752 [2024-11-25T12:18:28.843Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:32.752 12:18:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:32.752 12:18:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:32.752 12:18:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79076' 00:19:32.752 12:18:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 79076 00:19:32.752 12:18:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 79076 00:19:32.752 [2024-11-25 12:18:28.674678] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:33.011 [2024-11-25 12:18:29.055576] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:34.388 12:18:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:19:34.388 00:19:34.388 real 0m14.393s 00:19:34.388 user 
0m18.929s 00:19:34.388 sys 0m1.832s 00:19:34.388 12:18:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:34.388 12:18:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:34.388 ************************************ 00:19:34.388 END TEST raid_rebuild_test_io 00:19:34.388 ************************************ 00:19:34.388 12:18:30 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:19:34.388 12:18:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:34.388 12:18:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:34.388 12:18:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:34.388 ************************************ 00:19:34.388 START TEST raid_rebuild_test_sb_io 00:19:34.388 ************************************ 00:19:34.388 12:18:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:19:34.388 12:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:34.388 12:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:19:34.388 12:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:34.388 12:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:19:34.388 12:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:34.388 12:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:34.388 12:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:34.388 12:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:34.388 12:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:19:34.388 12:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:34.388 12:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:34.388 12:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:34.388 12:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:34.388 12:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:34.388 12:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:34.388 12:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:34.388 12:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:19:34.388 12:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:34.388 12:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:34.388 12:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:34.388 12:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:34.388 12:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:34.388 12:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:34.388 12:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:34.388 12:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:34.388 12:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:34.388 12:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:34.388 12:18:30 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:34.388 12:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:34.388 12:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:34.388 12:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79492 00:19:34.388 12:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:34.388 12:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79492 00:19:34.388 12:18:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79492 ']' 00:19:34.388 12:18:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.388 12:18:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:34.388 12:18:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.389 12:18:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:34.389 12:18:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:34.389 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:34.389 Zero copy mechanism will not be used. 00:19:34.389 [2024-11-25 12:18:30.308392] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:19:34.389 [2024-11-25 12:18:30.308571] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79492 ] 00:19:34.648 [2024-11-25 12:18:30.497518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.648 [2024-11-25 12:18:30.653943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.908 [2024-11-25 12:18:30.877686] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:34.908 [2024-11-25 12:18:30.877759] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:35.476 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:35.476 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:19:35.476 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:35.476 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:35.476 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.476 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:35.477 BaseBdev1_malloc 00:19:35.477 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.477 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:35.477 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.477 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:35.477 [2024-11-25 12:18:31.355219] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:35.477 [2024-11-25 12:18:31.355303] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.477 [2024-11-25 12:18:31.355348] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:35.477 [2024-11-25 12:18:31.355371] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.477 [2024-11-25 12:18:31.358104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.477 [2024-11-25 12:18:31.358155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:35.477 BaseBdev1 00:19:35.477 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.477 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:35.477 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:35.477 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.477 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:35.477 BaseBdev2_malloc 00:19:35.477 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.477 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:35.477 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.477 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:35.477 [2024-11-25 12:18:31.403037] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:35.477 [2024-11-25 12:18:31.403111] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:19:35.477 [2024-11-25 12:18:31.403140] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:35.477 [2024-11-25 12:18:31.403169] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.477 [2024-11-25 12:18:31.405821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.477 [2024-11-25 12:18:31.405870] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:35.477 BaseBdev2 00:19:35.477 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.477 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:35.477 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:35.477 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.477 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:35.477 BaseBdev3_malloc 00:19:35.477 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.477 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:35.477 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.477 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:35.477 [2024-11-25 12:18:31.469451] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:35.477 [2024-11-25 12:18:31.469538] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.477 [2024-11-25 12:18:31.469579] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:35.477 
[2024-11-25 12:18:31.469599] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.477 [2024-11-25 12:18:31.472427] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.477 [2024-11-25 12:18:31.472478] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:35.477 BaseBdev3 00:19:35.477 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.477 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:35.477 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:35.477 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.477 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:35.477 BaseBdev4_malloc 00:19:35.477 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.477 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:19:35.477 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.477 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:35.477 [2024-11-25 12:18:31.521301] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:19:35.477 [2024-11-25 12:18:31.521394] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.477 [2024-11-25 12:18:31.521426] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:35.477 [2024-11-25 12:18:31.521445] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.477 [2024-11-25 12:18:31.524242] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.477 [2024-11-25 12:18:31.524295] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:35.477 BaseBdev4 00:19:35.477 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.477 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:35.477 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.477 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:35.736 spare_malloc 00:19:35.736 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.736 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:35.736 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.736 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:35.736 spare_delay 00:19:35.736 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.736 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:35.736 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.736 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:35.736 [2024-11-25 12:18:31.583435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:35.736 [2024-11-25 12:18:31.583508] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.736 [2024-11-25 12:18:31.583538] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:19:35.736 [2024-11-25 12:18:31.583556] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.736 [2024-11-25 12:18:31.586396] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.736 [2024-11-25 12:18:31.586447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:35.736 spare 00:19:35.736 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.736 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:19:35.736 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.736 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:35.736 [2024-11-25 12:18:31.591517] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:35.736 [2024-11-25 12:18:31.593964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:35.736 [2024-11-25 12:18:31.594072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:35.736 [2024-11-25 12:18:31.594160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:35.736 [2024-11-25 12:18:31.594443] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:35.736 [2024-11-25 12:18:31.594471] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:35.736 [2024-11-25 12:18:31.594814] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:35.736 [2024-11-25 12:18:31.595065] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:35.736 [2024-11-25 12:18:31.595081] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:35.736 [2024-11-25 12:18:31.595295] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:35.736 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.736 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:35.736 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:35.736 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:35.736 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:35.736 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:35.736 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:35.736 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:35.736 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:35.736 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:35.736 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:35.736 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.736 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.736 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.736 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:35.736 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.736 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:35.736 "name": "raid_bdev1", 00:19:35.736 "uuid": "63f3166c-6656-4832-a315-04b955c46ce3", 00:19:35.736 "strip_size_kb": 0, 00:19:35.736 "state": "online", 00:19:35.736 "raid_level": "raid1", 00:19:35.736 "superblock": true, 00:19:35.736 "num_base_bdevs": 4, 00:19:35.736 "num_base_bdevs_discovered": 4, 00:19:35.736 "num_base_bdevs_operational": 4, 00:19:35.736 "base_bdevs_list": [ 00:19:35.736 { 00:19:35.736 "name": "BaseBdev1", 00:19:35.736 "uuid": "dd7fa971-f438-5449-a72f-66232e9381f2", 00:19:35.736 "is_configured": true, 00:19:35.736 "data_offset": 2048, 00:19:35.736 "data_size": 63488 00:19:35.736 }, 00:19:35.736 { 00:19:35.736 "name": "BaseBdev2", 00:19:35.736 "uuid": "f976ef14-c7bf-5b9a-b71a-7bb24d8c6a1b", 00:19:35.736 "is_configured": true, 00:19:35.736 "data_offset": 2048, 00:19:35.736 "data_size": 63488 00:19:35.736 }, 00:19:35.736 { 00:19:35.736 "name": "BaseBdev3", 00:19:35.736 "uuid": "5e468bfe-d78c-5fc4-81f6-ae5ad965486b", 00:19:35.736 "is_configured": true, 00:19:35.736 "data_offset": 2048, 00:19:35.736 "data_size": 63488 00:19:35.736 }, 00:19:35.736 { 00:19:35.736 "name": "BaseBdev4", 00:19:35.736 "uuid": "2e3e5b9e-d994-5ff4-9a68-188c2978040c", 00:19:35.736 "is_configured": true, 00:19:35.736 "data_offset": 2048, 00:19:35.736 "data_size": 63488 00:19:35.736 } 00:19:35.736 ] 00:19:35.736 }' 00:19:35.736 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:35.736 12:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:35.994 12:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:35.994 12:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:36.253 12:18:32 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.253 12:18:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:36.253 [2024-11-25 12:18:32.088117] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:36.253 12:18:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.253 12:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:19:36.253 12:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:36.253 12:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.253 12:18:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.253 12:18:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:36.253 12:18:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.253 12:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:19:36.253 12:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:19:36.253 12:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:36.253 12:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:36.253 12:18:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.253 12:18:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:36.253 [2024-11-25 12:18:32.167694] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:36.253 12:18:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.253 12:18:32 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:36.253 12:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:36.253 12:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:36.253 12:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:36.253 12:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:36.253 12:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:36.253 12:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:36.253 12:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:36.253 12:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:36.253 12:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:36.253 12:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.253 12:18:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.253 12:18:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:36.253 12:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.253 12:18:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.253 12:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:36.253 "name": "raid_bdev1", 00:19:36.253 "uuid": "63f3166c-6656-4832-a315-04b955c46ce3", 00:19:36.253 "strip_size_kb": 0, 00:19:36.253 "state": "online", 00:19:36.253 "raid_level": "raid1", 00:19:36.253 
"superblock": true, 00:19:36.253 "num_base_bdevs": 4, 00:19:36.253 "num_base_bdevs_discovered": 3, 00:19:36.253 "num_base_bdevs_operational": 3, 00:19:36.253 "base_bdevs_list": [ 00:19:36.253 { 00:19:36.253 "name": null, 00:19:36.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.253 "is_configured": false, 00:19:36.253 "data_offset": 0, 00:19:36.253 "data_size": 63488 00:19:36.253 }, 00:19:36.253 { 00:19:36.253 "name": "BaseBdev2", 00:19:36.253 "uuid": "f976ef14-c7bf-5b9a-b71a-7bb24d8c6a1b", 00:19:36.253 "is_configured": true, 00:19:36.253 "data_offset": 2048, 00:19:36.253 "data_size": 63488 00:19:36.253 }, 00:19:36.253 { 00:19:36.253 "name": "BaseBdev3", 00:19:36.253 "uuid": "5e468bfe-d78c-5fc4-81f6-ae5ad965486b", 00:19:36.253 "is_configured": true, 00:19:36.253 "data_offset": 2048, 00:19:36.253 "data_size": 63488 00:19:36.253 }, 00:19:36.253 { 00:19:36.253 "name": "BaseBdev4", 00:19:36.253 "uuid": "2e3e5b9e-d994-5ff4-9a68-188c2978040c", 00:19:36.253 "is_configured": true, 00:19:36.253 "data_offset": 2048, 00:19:36.253 "data_size": 63488 00:19:36.253 } 00:19:36.253 ] 00:19:36.253 }' 00:19:36.253 12:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:36.253 12:18:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:36.253 [2024-11-25 12:18:32.275916] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:36.253 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:36.253 Zero copy mechanism will not be used. 00:19:36.254 Running I/O for 60 seconds... 
00:19:36.819 12:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:36.819 12:18:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.819 12:18:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:36.820 [2024-11-25 12:18:32.702824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:36.820 12:18:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.820 12:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:36.820 [2024-11-25 12:18:32.797828] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:19:36.820 [2024-11-25 12:18:32.800476] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:37.078 [2024-11-25 12:18:32.920192] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:37.078 [2024-11-25 12:18:32.921857] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:37.078 [2024-11-25 12:18:33.145083] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:37.078 [2024-11-25 12:18:33.145967] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:37.596 114.00 IOPS, 342.00 MiB/s [2024-11-25T12:18:33.687Z] [2024-11-25 12:18:33.596814] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:37.596 [2024-11-25 12:18:33.597642] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:37.855 12:18:33 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:37.855 12:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:37.855 12:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:37.855 12:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:37.855 12:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:37.855 12:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.855 12:18:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.855 12:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.855 12:18:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:37.855 12:18:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.855 12:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:37.855 "name": "raid_bdev1", 00:19:37.855 "uuid": "63f3166c-6656-4832-a315-04b955c46ce3", 00:19:37.855 "strip_size_kb": 0, 00:19:37.855 "state": "online", 00:19:37.855 "raid_level": "raid1", 00:19:37.855 "superblock": true, 00:19:37.855 "num_base_bdevs": 4, 00:19:37.855 "num_base_bdevs_discovered": 4, 00:19:37.855 "num_base_bdevs_operational": 4, 00:19:37.855 "process": { 00:19:37.855 "type": "rebuild", 00:19:37.855 "target": "spare", 00:19:37.855 "progress": { 00:19:37.855 "blocks": 10240, 00:19:37.855 "percent": 16 00:19:37.855 } 00:19:37.855 }, 00:19:37.855 "base_bdevs_list": [ 00:19:37.855 { 00:19:37.855 "name": "spare", 00:19:37.855 "uuid": "a4f8d22f-2ffe-5dd6-ba69-319f8d005b0d", 00:19:37.855 "is_configured": true, 00:19:37.855 "data_offset": 2048, 00:19:37.855 "data_size": 63488 
00:19:37.855 }, 00:19:37.855 { 00:19:37.855 "name": "BaseBdev2", 00:19:37.855 "uuid": "f976ef14-c7bf-5b9a-b71a-7bb24d8c6a1b", 00:19:37.855 "is_configured": true, 00:19:37.855 "data_offset": 2048, 00:19:37.855 "data_size": 63488 00:19:37.855 }, 00:19:37.855 { 00:19:37.855 "name": "BaseBdev3", 00:19:37.855 "uuid": "5e468bfe-d78c-5fc4-81f6-ae5ad965486b", 00:19:37.855 "is_configured": true, 00:19:37.855 "data_offset": 2048, 00:19:37.855 "data_size": 63488 00:19:37.855 }, 00:19:37.855 { 00:19:37.855 "name": "BaseBdev4", 00:19:37.855 "uuid": "2e3e5b9e-d994-5ff4-9a68-188c2978040c", 00:19:37.855 "is_configured": true, 00:19:37.855 "data_offset": 2048, 00:19:37.855 "data_size": 63488 00:19:37.855 } 00:19:37.855 ] 00:19:37.855 }' 00:19:37.855 12:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:37.855 12:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:37.855 12:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:37.855 12:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:37.855 12:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:37.855 12:18:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.855 12:18:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:37.855 [2024-11-25 12:18:33.918774] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:38.115 [2024-11-25 12:18:34.026510] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:38.115 [2024-11-25 12:18:34.032207] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:38.115 [2024-11-25 12:18:34.032267] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:19:38.115 [2024-11-25 12:18:34.032290] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:38.115 [2024-11-25 12:18:34.048123] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:19:38.115 12:18:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.115 12:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:38.115 12:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:38.115 12:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:38.115 12:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:38.115 12:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:38.115 12:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:38.115 12:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:38.115 12:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:38.115 12:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:38.115 12:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:38.115 12:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.115 12:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.115 12:18:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.115 12:18:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:19:38.115 12:18:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.115 12:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:38.115 "name": "raid_bdev1", 00:19:38.115 "uuid": "63f3166c-6656-4832-a315-04b955c46ce3", 00:19:38.115 "strip_size_kb": 0, 00:19:38.115 "state": "online", 00:19:38.115 "raid_level": "raid1", 00:19:38.115 "superblock": true, 00:19:38.115 "num_base_bdevs": 4, 00:19:38.115 "num_base_bdevs_discovered": 3, 00:19:38.115 "num_base_bdevs_operational": 3, 00:19:38.115 "base_bdevs_list": [ 00:19:38.115 { 00:19:38.115 "name": null, 00:19:38.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.115 "is_configured": false, 00:19:38.115 "data_offset": 0, 00:19:38.115 "data_size": 63488 00:19:38.115 }, 00:19:38.115 { 00:19:38.115 "name": "BaseBdev2", 00:19:38.115 "uuid": "f976ef14-c7bf-5b9a-b71a-7bb24d8c6a1b", 00:19:38.115 "is_configured": true, 00:19:38.115 "data_offset": 2048, 00:19:38.115 "data_size": 63488 00:19:38.115 }, 00:19:38.115 { 00:19:38.115 "name": "BaseBdev3", 00:19:38.115 "uuid": "5e468bfe-d78c-5fc4-81f6-ae5ad965486b", 00:19:38.115 "is_configured": true, 00:19:38.115 "data_offset": 2048, 00:19:38.115 "data_size": 63488 00:19:38.115 }, 00:19:38.115 { 00:19:38.115 "name": "BaseBdev4", 00:19:38.115 "uuid": "2e3e5b9e-d994-5ff4-9a68-188c2978040c", 00:19:38.115 "is_configured": true, 00:19:38.115 "data_offset": 2048, 00:19:38.115 "data_size": 63488 00:19:38.115 } 00:19:38.115 ] 00:19:38.115 }' 00:19:38.115 12:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:38.115 12:18:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:38.633 120.50 IOPS, 361.50 MiB/s [2024-11-25T12:18:34.724Z] 12:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:38.633 12:18:34 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:38.633 12:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:38.633 12:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:38.633 12:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:38.633 12:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.633 12:18:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.633 12:18:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:38.633 12:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.633 12:18:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.633 12:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:38.633 "name": "raid_bdev1", 00:19:38.633 "uuid": "63f3166c-6656-4832-a315-04b955c46ce3", 00:19:38.633 "strip_size_kb": 0, 00:19:38.633 "state": "online", 00:19:38.633 "raid_level": "raid1", 00:19:38.633 "superblock": true, 00:19:38.633 "num_base_bdevs": 4, 00:19:38.633 "num_base_bdevs_discovered": 3, 00:19:38.633 "num_base_bdevs_operational": 3, 00:19:38.633 "base_bdevs_list": [ 00:19:38.633 { 00:19:38.633 "name": null, 00:19:38.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.633 "is_configured": false, 00:19:38.633 "data_offset": 0, 00:19:38.633 "data_size": 63488 00:19:38.633 }, 00:19:38.633 { 00:19:38.633 "name": "BaseBdev2", 00:19:38.633 "uuid": "f976ef14-c7bf-5b9a-b71a-7bb24d8c6a1b", 00:19:38.633 "is_configured": true, 00:19:38.633 "data_offset": 2048, 00:19:38.633 "data_size": 63488 00:19:38.633 }, 00:19:38.633 { 00:19:38.633 "name": "BaseBdev3", 00:19:38.633 "uuid": "5e468bfe-d78c-5fc4-81f6-ae5ad965486b", 
00:19:38.633 "is_configured": true, 00:19:38.633 "data_offset": 2048, 00:19:38.633 "data_size": 63488 00:19:38.633 }, 00:19:38.633 { 00:19:38.633 "name": "BaseBdev4", 00:19:38.633 "uuid": "2e3e5b9e-d994-5ff4-9a68-188c2978040c", 00:19:38.633 "is_configured": true, 00:19:38.633 "data_offset": 2048, 00:19:38.633 "data_size": 63488 00:19:38.633 } 00:19:38.633 ] 00:19:38.633 }' 00:19:38.633 12:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:38.633 12:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:38.633 12:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:38.892 12:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:38.892 12:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:38.892 12:18:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.892 12:18:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:38.892 [2024-11-25 12:18:34.774465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:38.892 12:18:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.892 12:18:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:38.892 [2024-11-25 12:18:34.838937] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:38.892 [2024-11-25 12:18:34.841507] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:38.892 [2024-11-25 12:18:34.953196] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:38.892 [2024-11-25 12:18:34.953882] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:39.151 [2024-11-25 12:18:35.075828] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:39.151 [2024-11-25 12:18:35.076220] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:39.410 129.00 IOPS, 387.00 MiB/s [2024-11-25T12:18:35.501Z] [2024-11-25 12:18:35.311456] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:39.410 [2024-11-25 12:18:35.432304] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:39.669 [2024-11-25 12:18:35.685222] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:39.928 12:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:39.928 12:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:39.928 12:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:39.928 12:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:39.928 12:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:39.928 12:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.928 12:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.928 12:18:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.928 12:18:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:19:39.928 12:18:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.928 12:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:39.928 "name": "raid_bdev1", 00:19:39.928 "uuid": "63f3166c-6656-4832-a315-04b955c46ce3", 00:19:39.928 "strip_size_kb": 0, 00:19:39.928 "state": "online", 00:19:39.928 "raid_level": "raid1", 00:19:39.928 "superblock": true, 00:19:39.928 "num_base_bdevs": 4, 00:19:39.928 "num_base_bdevs_discovered": 4, 00:19:39.928 "num_base_bdevs_operational": 4, 00:19:39.928 "process": { 00:19:39.928 "type": "rebuild", 00:19:39.928 "target": "spare", 00:19:39.928 "progress": { 00:19:39.928 "blocks": 14336, 00:19:39.928 "percent": 22 00:19:39.928 } 00:19:39.928 }, 00:19:39.928 "base_bdevs_list": [ 00:19:39.928 { 00:19:39.928 "name": "spare", 00:19:39.928 "uuid": "a4f8d22f-2ffe-5dd6-ba69-319f8d005b0d", 00:19:39.928 "is_configured": true, 00:19:39.928 "data_offset": 2048, 00:19:39.928 "data_size": 63488 00:19:39.928 }, 00:19:39.928 { 00:19:39.928 "name": "BaseBdev2", 00:19:39.928 "uuid": "f976ef14-c7bf-5b9a-b71a-7bb24d8c6a1b", 00:19:39.928 "is_configured": true, 00:19:39.928 "data_offset": 2048, 00:19:39.928 "data_size": 63488 00:19:39.928 }, 00:19:39.928 { 00:19:39.928 "name": "BaseBdev3", 00:19:39.928 "uuid": "5e468bfe-d78c-5fc4-81f6-ae5ad965486b", 00:19:39.928 "is_configured": true, 00:19:39.928 "data_offset": 2048, 00:19:39.928 "data_size": 63488 00:19:39.928 }, 00:19:39.928 { 00:19:39.928 "name": "BaseBdev4", 00:19:39.928 "uuid": "2e3e5b9e-d994-5ff4-9a68-188c2978040c", 00:19:39.928 "is_configured": true, 00:19:39.928 "data_offset": 2048, 00:19:39.928 "data_size": 63488 00:19:39.929 } 00:19:39.929 ] 00:19:39.929 }' 00:19:39.929 12:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:39.929 [2024-11-25 12:18:35.920442] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 
offset_begin: 12288 offset_end: 18432 00:19:39.929 12:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:39.929 12:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:39.929 12:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:39.929 12:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:39.929 12:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:39.929 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:39.929 12:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:19:39.929 12:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:39.929 12:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:19:39.929 12:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:39.929 12:18:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.929 12:18:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:39.929 [2024-11-25 12:18:35.988542] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:40.498 119.00 IOPS, 357.00 MiB/s [2024-11-25T12:18:36.589Z] [2024-11-25 12:18:36.372524] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:19:40.498 [2024-11-25 12:18:36.372596] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:19:40.498 12:18:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.498 12:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # 
base_bdevs[1]= 00:19:40.498 12:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:19:40.498 12:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:40.498 12:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:40.498 12:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:40.498 12:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:40.498 12:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:40.498 12:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.498 12:18:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.498 12:18:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:40.498 12:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.498 12:18:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.498 12:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:40.498 "name": "raid_bdev1", 00:19:40.498 "uuid": "63f3166c-6656-4832-a315-04b955c46ce3", 00:19:40.498 "strip_size_kb": 0, 00:19:40.498 "state": "online", 00:19:40.498 "raid_level": "raid1", 00:19:40.498 "superblock": true, 00:19:40.498 "num_base_bdevs": 4, 00:19:40.498 "num_base_bdevs_discovered": 3, 00:19:40.498 "num_base_bdevs_operational": 3, 00:19:40.498 "process": { 00:19:40.498 "type": "rebuild", 00:19:40.498 "target": "spare", 00:19:40.498 "progress": { 00:19:40.498 "blocks": 18432, 00:19:40.498 "percent": 29 00:19:40.498 } 00:19:40.498 }, 00:19:40.498 "base_bdevs_list": [ 00:19:40.498 { 
00:19:40.498 "name": "spare", 00:19:40.498 "uuid": "a4f8d22f-2ffe-5dd6-ba69-319f8d005b0d", 00:19:40.498 "is_configured": true, 00:19:40.498 "data_offset": 2048, 00:19:40.498 "data_size": 63488 00:19:40.498 }, 00:19:40.498 { 00:19:40.498 "name": null, 00:19:40.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.498 "is_configured": false, 00:19:40.498 "data_offset": 0, 00:19:40.498 "data_size": 63488 00:19:40.498 }, 00:19:40.498 { 00:19:40.498 "name": "BaseBdev3", 00:19:40.498 "uuid": "5e468bfe-d78c-5fc4-81f6-ae5ad965486b", 00:19:40.498 "is_configured": true, 00:19:40.498 "data_offset": 2048, 00:19:40.498 "data_size": 63488 00:19:40.498 }, 00:19:40.498 { 00:19:40.498 "name": "BaseBdev4", 00:19:40.498 "uuid": "2e3e5b9e-d994-5ff4-9a68-188c2978040c", 00:19:40.498 "is_configured": true, 00:19:40.498 "data_offset": 2048, 00:19:40.498 "data_size": 63488 00:19:40.498 } 00:19:40.498 ] 00:19:40.498 }' 00:19:40.498 12:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:40.498 12:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:40.498 12:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:40.498 [2024-11-25 12:18:36.523174] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:19:40.498 12:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:40.498 12:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=535 00:19:40.498 12:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:40.498 12:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:40.498 12:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:19:40.498 12:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:40.498 12:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:40.498 12:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:40.498 12:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.498 12:18:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.498 12:18:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:40.498 12:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.498 12:18:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.758 12:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:40.758 "name": "raid_bdev1", 00:19:40.758 "uuid": "63f3166c-6656-4832-a315-04b955c46ce3", 00:19:40.758 "strip_size_kb": 0, 00:19:40.758 "state": "online", 00:19:40.758 "raid_level": "raid1", 00:19:40.758 "superblock": true, 00:19:40.758 "num_base_bdevs": 4, 00:19:40.758 "num_base_bdevs_discovered": 3, 00:19:40.758 "num_base_bdevs_operational": 3, 00:19:40.758 "process": { 00:19:40.758 "type": "rebuild", 00:19:40.758 "target": "spare", 00:19:40.758 "progress": { 00:19:40.758 "blocks": 20480, 00:19:40.758 "percent": 32 00:19:40.758 } 00:19:40.758 }, 00:19:40.758 "base_bdevs_list": [ 00:19:40.758 { 00:19:40.758 "name": "spare", 00:19:40.758 "uuid": "a4f8d22f-2ffe-5dd6-ba69-319f8d005b0d", 00:19:40.758 "is_configured": true, 00:19:40.758 "data_offset": 2048, 00:19:40.758 "data_size": 63488 00:19:40.758 }, 00:19:40.758 { 00:19:40.758 "name": null, 00:19:40.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.758 "is_configured": false, 00:19:40.758 
"data_offset": 0, 00:19:40.758 "data_size": 63488 00:19:40.758 }, 00:19:40.758 { 00:19:40.758 "name": "BaseBdev3", 00:19:40.758 "uuid": "5e468bfe-d78c-5fc4-81f6-ae5ad965486b", 00:19:40.758 "is_configured": true, 00:19:40.758 "data_offset": 2048, 00:19:40.758 "data_size": 63488 00:19:40.758 }, 00:19:40.758 { 00:19:40.758 "name": "BaseBdev4", 00:19:40.758 "uuid": "2e3e5b9e-d994-5ff4-9a68-188c2978040c", 00:19:40.758 "is_configured": true, 00:19:40.758 "data_offset": 2048, 00:19:40.758 "data_size": 63488 00:19:40.758 } 00:19:40.758 ] 00:19:40.758 }' 00:19:40.758 12:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:40.758 12:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:40.758 12:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:40.758 12:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:40.758 12:18:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:41.326 [2024-11-25 12:18:37.278422] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:19:41.584 106.80 IOPS, 320.40 MiB/s [2024-11-25T12:18:37.675Z] [2024-11-25 12:18:37.482428] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:19:41.843 12:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:41.843 12:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:41.843 12:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:41.843 12:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:41.843 12:18:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:41.843 12:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:41.843 12:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.843 12:18:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.843 12:18:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:41.843 12:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.843 12:18:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.843 12:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:41.843 "name": "raid_bdev1", 00:19:41.843 "uuid": "63f3166c-6656-4832-a315-04b955c46ce3", 00:19:41.843 "strip_size_kb": 0, 00:19:41.843 "state": "online", 00:19:41.843 "raid_level": "raid1", 00:19:41.843 "superblock": true, 00:19:41.843 "num_base_bdevs": 4, 00:19:41.843 "num_base_bdevs_discovered": 3, 00:19:41.843 "num_base_bdevs_operational": 3, 00:19:41.843 "process": { 00:19:41.843 "type": "rebuild", 00:19:41.843 "target": "spare", 00:19:41.843 "progress": { 00:19:41.843 "blocks": 36864, 00:19:41.843 "percent": 58 00:19:41.843 } 00:19:41.843 }, 00:19:41.843 "base_bdevs_list": [ 00:19:41.843 { 00:19:41.843 "name": "spare", 00:19:41.843 "uuid": "a4f8d22f-2ffe-5dd6-ba69-319f8d005b0d", 00:19:41.843 "is_configured": true, 00:19:41.843 "data_offset": 2048, 00:19:41.843 "data_size": 63488 00:19:41.843 }, 00:19:41.843 { 00:19:41.843 "name": null, 00:19:41.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.844 "is_configured": false, 00:19:41.844 "data_offset": 0, 00:19:41.844 "data_size": 63488 00:19:41.844 }, 00:19:41.844 { 00:19:41.844 "name": "BaseBdev3", 00:19:41.844 "uuid": 
"5e468bfe-d78c-5fc4-81f6-ae5ad965486b", 00:19:41.844 "is_configured": true, 00:19:41.844 "data_offset": 2048, 00:19:41.844 "data_size": 63488 00:19:41.844 }, 00:19:41.844 { 00:19:41.844 "name": "BaseBdev4", 00:19:41.844 "uuid": "2e3e5b9e-d994-5ff4-9a68-188c2978040c", 00:19:41.844 "is_configured": true, 00:19:41.844 "data_offset": 2048, 00:19:41.844 "data_size": 63488 00:19:41.844 } 00:19:41.844 ] 00:19:41.844 }' 00:19:41.844 12:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:41.844 12:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:41.844 12:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:41.844 12:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:41.844 12:18:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:42.463 96.00 IOPS, 288.00 MiB/s [2024-11-25T12:18:38.554Z] [2024-11-25 12:18:38.333104] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:19:43.031 [2024-11-25 12:18:38.817707] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:19:43.031 12:18:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:43.031 12:18:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:43.031 12:18:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:43.031 12:18:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:43.031 12:18:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:43.031 12:18:38 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:43.031 12:18:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.031 12:18:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.031 12:18:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:43.031 12:18:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.031 12:18:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.031 12:18:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:43.031 "name": "raid_bdev1", 00:19:43.031 "uuid": "63f3166c-6656-4832-a315-04b955c46ce3", 00:19:43.031 "strip_size_kb": 0, 00:19:43.031 "state": "online", 00:19:43.031 "raid_level": "raid1", 00:19:43.031 "superblock": true, 00:19:43.031 "num_base_bdevs": 4, 00:19:43.031 "num_base_bdevs_discovered": 3, 00:19:43.031 "num_base_bdevs_operational": 3, 00:19:43.031 "process": { 00:19:43.031 "type": "rebuild", 00:19:43.031 "target": "spare", 00:19:43.031 "progress": { 00:19:43.031 "blocks": 53248, 00:19:43.031 "percent": 83 00:19:43.031 } 00:19:43.031 }, 00:19:43.031 "base_bdevs_list": [ 00:19:43.031 { 00:19:43.031 "name": "spare", 00:19:43.031 "uuid": "a4f8d22f-2ffe-5dd6-ba69-319f8d005b0d", 00:19:43.031 "is_configured": true, 00:19:43.031 "data_offset": 2048, 00:19:43.031 "data_size": 63488 00:19:43.031 }, 00:19:43.031 { 00:19:43.032 "name": null, 00:19:43.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.032 "is_configured": false, 00:19:43.032 "data_offset": 0, 00:19:43.032 "data_size": 63488 00:19:43.032 }, 00:19:43.032 { 00:19:43.032 "name": "BaseBdev3", 00:19:43.032 "uuid": "5e468bfe-d78c-5fc4-81f6-ae5ad965486b", 00:19:43.032 "is_configured": true, 00:19:43.032 "data_offset": 2048, 00:19:43.032 "data_size": 63488 00:19:43.032 }, 00:19:43.032 { 
00:19:43.032 "name": "BaseBdev4", 00:19:43.032 "uuid": "2e3e5b9e-d994-5ff4-9a68-188c2978040c", 00:19:43.032 "is_configured": true, 00:19:43.032 "data_offset": 2048, 00:19:43.032 "data_size": 63488 00:19:43.032 } 00:19:43.032 ] 00:19:43.032 }' 00:19:43.032 12:18:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:43.032 12:18:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:43.032 12:18:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:43.032 12:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:43.032 12:18:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:43.032 [2024-11-25 12:18:39.074192] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:19:43.547 89.43 IOPS, 268.29 MiB/s [2024-11-25T12:18:39.638Z] [2024-11-25 12:18:39.417139] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:43.547 [2024-11-25 12:18:39.524769] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:43.547 [2024-11-25 12:18:39.528101] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:44.116 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:44.116 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:44.116 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:44.116 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:44.116 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:44.116 
12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:44.116 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.116 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.116 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.116 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:44.116 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.116 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:44.116 "name": "raid_bdev1", 00:19:44.116 "uuid": "63f3166c-6656-4832-a315-04b955c46ce3", 00:19:44.116 "strip_size_kb": 0, 00:19:44.116 "state": "online", 00:19:44.116 "raid_level": "raid1", 00:19:44.116 "superblock": true, 00:19:44.116 "num_base_bdevs": 4, 00:19:44.116 "num_base_bdevs_discovered": 3, 00:19:44.116 "num_base_bdevs_operational": 3, 00:19:44.116 "base_bdevs_list": [ 00:19:44.116 { 00:19:44.116 "name": "spare", 00:19:44.116 "uuid": "a4f8d22f-2ffe-5dd6-ba69-319f8d005b0d", 00:19:44.116 "is_configured": true, 00:19:44.116 "data_offset": 2048, 00:19:44.116 "data_size": 63488 00:19:44.116 }, 00:19:44.116 { 00:19:44.116 "name": null, 00:19:44.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.116 "is_configured": false, 00:19:44.116 "data_offset": 0, 00:19:44.116 "data_size": 63488 00:19:44.116 }, 00:19:44.116 { 00:19:44.116 "name": "BaseBdev3", 00:19:44.116 "uuid": "5e468bfe-d78c-5fc4-81f6-ae5ad965486b", 00:19:44.116 "is_configured": true, 00:19:44.116 "data_offset": 2048, 00:19:44.116 "data_size": 63488 00:19:44.116 }, 00:19:44.116 { 00:19:44.116 "name": "BaseBdev4", 00:19:44.116 "uuid": "2e3e5b9e-d994-5ff4-9a68-188c2978040c", 00:19:44.116 "is_configured": true, 00:19:44.116 "data_offset": 
2048, 00:19:44.116 "data_size": 63488 00:19:44.116 } 00:19:44.116 ] 00:19:44.116 }' 00:19:44.116 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:44.116 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:44.116 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:44.116 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:44.116 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:19:44.116 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:44.116 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:44.116 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:44.116 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:44.116 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:44.116 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.116 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.116 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.116 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:44.376 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.376 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:44.376 "name": "raid_bdev1", 00:19:44.376 "uuid": "63f3166c-6656-4832-a315-04b955c46ce3", 00:19:44.376 
"strip_size_kb": 0, 00:19:44.376 "state": "online", 00:19:44.376 "raid_level": "raid1", 00:19:44.376 "superblock": true, 00:19:44.376 "num_base_bdevs": 4, 00:19:44.376 "num_base_bdevs_discovered": 3, 00:19:44.376 "num_base_bdevs_operational": 3, 00:19:44.376 "base_bdevs_list": [ 00:19:44.376 { 00:19:44.376 "name": "spare", 00:19:44.376 "uuid": "a4f8d22f-2ffe-5dd6-ba69-319f8d005b0d", 00:19:44.376 "is_configured": true, 00:19:44.376 "data_offset": 2048, 00:19:44.376 "data_size": 63488 00:19:44.376 }, 00:19:44.376 { 00:19:44.376 "name": null, 00:19:44.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.376 "is_configured": false, 00:19:44.376 "data_offset": 0, 00:19:44.376 "data_size": 63488 00:19:44.376 }, 00:19:44.376 { 00:19:44.376 "name": "BaseBdev3", 00:19:44.376 "uuid": "5e468bfe-d78c-5fc4-81f6-ae5ad965486b", 00:19:44.376 "is_configured": true, 00:19:44.376 "data_offset": 2048, 00:19:44.376 "data_size": 63488 00:19:44.376 }, 00:19:44.376 { 00:19:44.376 "name": "BaseBdev4", 00:19:44.377 "uuid": "2e3e5b9e-d994-5ff4-9a68-188c2978040c", 00:19:44.377 "is_configured": true, 00:19:44.377 "data_offset": 2048, 00:19:44.377 "data_size": 63488 00:19:44.377 } 00:19:44.377 ] 00:19:44.377 }' 00:19:44.377 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:44.377 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:44.377 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:44.377 83.12 IOPS, 249.38 MiB/s [2024-11-25T12:18:40.468Z] 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:44.377 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:44.377 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:44.377 12:18:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:44.377 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:44.377 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:44.377 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:44.377 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:44.377 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:44.377 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:44.377 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:44.377 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.377 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.377 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:44.377 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.377 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.377 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:44.377 "name": "raid_bdev1", 00:19:44.377 "uuid": "63f3166c-6656-4832-a315-04b955c46ce3", 00:19:44.377 "strip_size_kb": 0, 00:19:44.377 "state": "online", 00:19:44.377 "raid_level": "raid1", 00:19:44.377 "superblock": true, 00:19:44.377 "num_base_bdevs": 4, 00:19:44.377 "num_base_bdevs_discovered": 3, 00:19:44.377 "num_base_bdevs_operational": 3, 00:19:44.377 "base_bdevs_list": [ 00:19:44.377 { 00:19:44.377 "name": "spare", 00:19:44.377 "uuid": 
"a4f8d22f-2ffe-5dd6-ba69-319f8d005b0d", 00:19:44.377 "is_configured": true, 00:19:44.377 "data_offset": 2048, 00:19:44.377 "data_size": 63488 00:19:44.377 }, 00:19:44.377 { 00:19:44.377 "name": null, 00:19:44.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.377 "is_configured": false, 00:19:44.377 "data_offset": 0, 00:19:44.377 "data_size": 63488 00:19:44.377 }, 00:19:44.377 { 00:19:44.377 "name": "BaseBdev3", 00:19:44.377 "uuid": "5e468bfe-d78c-5fc4-81f6-ae5ad965486b", 00:19:44.377 "is_configured": true, 00:19:44.377 "data_offset": 2048, 00:19:44.377 "data_size": 63488 00:19:44.377 }, 00:19:44.377 { 00:19:44.377 "name": "BaseBdev4", 00:19:44.377 "uuid": "2e3e5b9e-d994-5ff4-9a68-188c2978040c", 00:19:44.377 "is_configured": true, 00:19:44.377 "data_offset": 2048, 00:19:44.377 "data_size": 63488 00:19:44.377 } 00:19:44.377 ] 00:19:44.377 }' 00:19:44.377 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:44.377 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:44.945 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:44.945 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.945 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:44.945 [2024-11-25 12:18:40.833648] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:44.946 [2024-11-25 12:18:40.833826] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:44.946 00:19:44.946 Latency(us) 00:19:44.946 [2024-11-25T12:18:41.037Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.946 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:19:44.946 raid_bdev1 : 8.59 79.70 239.10 0.00 0.00 17051.71 290.44 113436.86 00:19:44.946 
[2024-11-25T12:18:41.037Z] =================================================================================================================== 00:19:44.946 [2024-11-25T12:18:41.037Z] Total : 79.70 239.10 0.00 0.00 17051.71 290.44 113436.86 00:19:44.946 [2024-11-25 12:18:40.893661] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:44.946 [2024-11-25 12:18:40.893892] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:44.946 [2024-11-25 12:18:40.894094] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:44.946 [2024-11-25 12:18:40.894263] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:44.946 { 00:19:44.946 "results": [ 00:19:44.946 { 00:19:44.946 "job": "raid_bdev1", 00:19:44.946 "core_mask": "0x1", 00:19:44.946 "workload": "randrw", 00:19:44.946 "percentage": 50, 00:19:44.946 "status": "finished", 00:19:44.946 "queue_depth": 2, 00:19:44.946 "io_size": 3145728, 00:19:44.946 "runtime": 8.594867, 00:19:44.946 "iops": 79.69873181283666, 00:19:44.946 "mibps": 239.09619543850997, 00:19:44.946 "io_failed": 0, 00:19:44.946 "io_timeout": 0, 00:19:44.946 "avg_latency_us": 17051.70915461181, 00:19:44.946 "min_latency_us": 290.44363636363636, 00:19:44.946 "max_latency_us": 113436.85818181818 00:19:44.946 } 00:19:44.946 ], 00:19:44.946 "core_count": 1 00:19:44.946 } 00:19:44.946 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.946 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:19:44.946 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.946 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.946 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:19:44.946 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.946 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:44.946 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:44.946 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:19:44.946 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:19:44.946 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:44.946 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:19:44.946 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:44.946 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:44.946 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:44.946 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:19:44.946 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:44.946 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:44.946 12:18:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:19:45.205 /dev/nbd0 00:19:45.205 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:45.205 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:45.205 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:45.205 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 
-- # local i 00:19:45.205 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:45.205 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:45.205 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:45.205 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:19:45.205 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:45.205 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:45.205 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:45.531 1+0 records in 00:19:45.531 1+0 records out 00:19:45.531 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00036559 s, 11.2 MB/s 00:19:45.531 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:45.531 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:19:45.531 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:45.531 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:45.531 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:19:45.531 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:45.531 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:45.531 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:19:45.531 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:19:45.531 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:19:45.531 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:19:45.531 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:19:45.531 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:19:45.531 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:45.531 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:19:45.531 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:45.531 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:19:45.531 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:45.531 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:19:45.531 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:45.531 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:45.531 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:19:45.531 /dev/nbd1 00:19:45.839 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:45.839 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:45.839 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:45.839 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:19:45.839 12:18:41 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:45.839 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:45.839 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:45.839 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:19:45.839 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:45.839 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:45.839 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:45.839 1+0 records in 00:19:45.839 1+0 records out 00:19:45.839 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413213 s, 9.9 MB/s 00:19:45.839 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:45.839 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:19:45.839 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:45.839 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:45.839 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:19:45.839 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:45.839 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:45.839 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:45.839 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks 
/var/tmp/spdk.sock /dev/nbd1 00:19:45.839 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:45.839 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:19:45.839 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:45.839 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:19:45.839 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:45.839 12:18:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:46.098 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:46.099 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:46.099 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:46.099 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:46.099 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:46.099 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:46.099 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:19:46.099 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:46.099 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:19:46.099 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:19:46.099 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:19:46.099 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:46.099 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:19:46.099 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:46.099 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:19:46.099 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:46.099 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:19:46.099 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:46.099 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:46.099 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:19:46.357 /dev/nbd1 00:19:46.357 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:46.357 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:46.357 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:46.357 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:19:46.357 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:46.357 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:46.357 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:46.357 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:19:46.357 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:46.357 12:18:42 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:46.357 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:46.616 1+0 records in 00:19:46.616 1+0 records out 00:19:46.616 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000428123 s, 9.6 MB/s 00:19:46.616 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:46.616 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:19:46.616 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:46.616 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:46.616 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:19:46.616 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:46.616 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:46.616 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:46.616 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:19:46.616 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:46.616 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:19:46.616 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:46.616 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:19:46.616 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:19:46.616 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:46.875 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:46.875 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:46.875 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:46.875 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:46.875 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:46.875 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:46.875 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:19:46.875 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:46.875 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:46.875 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:46.875 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:46.875 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:46.875 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:19:46.875 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:46.875 12:18:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:47.135 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:47.135 12:18:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:47.135 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:47.135 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:47.135 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:47.135 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:47.135 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:19:47.135 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:47.135 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:47.135 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:47.135 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.135 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:47.135 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.135 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:47.135 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.135 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:47.135 [2024-11-25 12:18:43.190484] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:47.135 [2024-11-25 12:18:43.190555] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:47.135 [2024-11-25 12:18:43.190585] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:19:47.135 [2024-11-25 
12:18:43.190603] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:47.135 [2024-11-25 12:18:43.193520] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:47.135 [2024-11-25 12:18:43.193570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:47.135 [2024-11-25 12:18:43.193683] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:47.135 [2024-11-25 12:18:43.193766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:47.135 [2024-11-25 12:18:43.193938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:47.135 [2024-11-25 12:18:43.194081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:47.135 spare 00:19:47.135 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.135 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:47.135 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.135 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:47.394 [2024-11-25 12:18:43.294230] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:47.394 [2024-11-25 12:18:43.294300] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:47.394 [2024-11-25 12:18:43.294804] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:19:47.394 [2024-11-25 12:18:43.295070] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:47.394 [2024-11-25 12:18:43.295086] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:47.394 [2024-11-25 12:18:43.295377] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:47.394 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.394 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:47.394 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:47.394 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:47.394 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:47.394 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:47.394 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:47.394 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:47.394 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:47.394 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:47.394 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:47.394 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.394 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.394 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:47.394 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.394 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.394 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:47.394 "name": 
"raid_bdev1", 00:19:47.394 "uuid": "63f3166c-6656-4832-a315-04b955c46ce3", 00:19:47.394 "strip_size_kb": 0, 00:19:47.394 "state": "online", 00:19:47.394 "raid_level": "raid1", 00:19:47.394 "superblock": true, 00:19:47.394 "num_base_bdevs": 4, 00:19:47.394 "num_base_bdevs_discovered": 3, 00:19:47.394 "num_base_bdevs_operational": 3, 00:19:47.394 "base_bdevs_list": [ 00:19:47.394 { 00:19:47.394 "name": "spare", 00:19:47.394 "uuid": "a4f8d22f-2ffe-5dd6-ba69-319f8d005b0d", 00:19:47.394 "is_configured": true, 00:19:47.394 "data_offset": 2048, 00:19:47.394 "data_size": 63488 00:19:47.394 }, 00:19:47.394 { 00:19:47.394 "name": null, 00:19:47.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.394 "is_configured": false, 00:19:47.394 "data_offset": 2048, 00:19:47.394 "data_size": 63488 00:19:47.394 }, 00:19:47.394 { 00:19:47.394 "name": "BaseBdev3", 00:19:47.394 "uuid": "5e468bfe-d78c-5fc4-81f6-ae5ad965486b", 00:19:47.394 "is_configured": true, 00:19:47.394 "data_offset": 2048, 00:19:47.394 "data_size": 63488 00:19:47.394 }, 00:19:47.394 { 00:19:47.394 "name": "BaseBdev4", 00:19:47.394 "uuid": "2e3e5b9e-d994-5ff4-9a68-188c2978040c", 00:19:47.394 "is_configured": true, 00:19:47.394 "data_offset": 2048, 00:19:47.394 "data_size": 63488 00:19:47.394 } 00:19:47.394 ] 00:19:47.394 }' 00:19:47.394 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:47.394 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:47.963 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:47.963 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:47.963 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:47.963 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:47.963 12:18:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:47.963 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.963 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.963 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.963 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:47.963 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.963 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:47.963 "name": "raid_bdev1", 00:19:47.963 "uuid": "63f3166c-6656-4832-a315-04b955c46ce3", 00:19:47.963 "strip_size_kb": 0, 00:19:47.963 "state": "online", 00:19:47.963 "raid_level": "raid1", 00:19:47.963 "superblock": true, 00:19:47.963 "num_base_bdevs": 4, 00:19:47.963 "num_base_bdevs_discovered": 3, 00:19:47.963 "num_base_bdevs_operational": 3, 00:19:47.963 "base_bdevs_list": [ 00:19:47.963 { 00:19:47.963 "name": "spare", 00:19:47.963 "uuid": "a4f8d22f-2ffe-5dd6-ba69-319f8d005b0d", 00:19:47.963 "is_configured": true, 00:19:47.963 "data_offset": 2048, 00:19:47.963 "data_size": 63488 00:19:47.963 }, 00:19:47.963 { 00:19:47.963 "name": null, 00:19:47.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.963 "is_configured": false, 00:19:47.963 "data_offset": 2048, 00:19:47.963 "data_size": 63488 00:19:47.963 }, 00:19:47.963 { 00:19:47.963 "name": "BaseBdev3", 00:19:47.963 "uuid": "5e468bfe-d78c-5fc4-81f6-ae5ad965486b", 00:19:47.963 "is_configured": true, 00:19:47.963 "data_offset": 2048, 00:19:47.963 "data_size": 63488 00:19:47.963 }, 00:19:47.963 { 00:19:47.963 "name": "BaseBdev4", 00:19:47.963 "uuid": "2e3e5b9e-d994-5ff4-9a68-188c2978040c", 00:19:47.963 "is_configured": true, 00:19:47.963 "data_offset": 2048, 
00:19:47.963 "data_size": 63488 00:19:47.963 } 00:19:47.963 ] 00:19:47.963 }' 00:19:47.963 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:47.963 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:47.963 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:47.963 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:47.963 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.963 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.963 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:47.963 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:47.963 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.963 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:47.963 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:47.963 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.963 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:47.963 [2024-11-25 12:18:43.984024] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:47.963 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.963 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:47.963 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:19:47.963 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:47.963 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:47.963 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:47.963 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:47.963 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:47.963 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:47.963 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:47.963 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:47.963 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.963 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.963 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.963 12:18:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:47.963 12:18:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.963 12:18:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:47.963 "name": "raid_bdev1", 00:19:47.963 "uuid": "63f3166c-6656-4832-a315-04b955c46ce3", 00:19:47.963 "strip_size_kb": 0, 00:19:47.963 "state": "online", 00:19:47.963 "raid_level": "raid1", 00:19:47.963 "superblock": true, 00:19:47.963 "num_base_bdevs": 4, 00:19:47.963 "num_base_bdevs_discovered": 2, 00:19:47.963 "num_base_bdevs_operational": 2, 00:19:47.963 "base_bdevs_list": [ 00:19:47.963 { 00:19:47.963 "name": 
null, 00:19:47.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.963 "is_configured": false, 00:19:47.963 "data_offset": 0, 00:19:47.963 "data_size": 63488 00:19:47.963 }, 00:19:47.963 { 00:19:47.963 "name": null, 00:19:47.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.963 "is_configured": false, 00:19:47.963 "data_offset": 2048, 00:19:47.963 "data_size": 63488 00:19:47.963 }, 00:19:47.963 { 00:19:47.963 "name": "BaseBdev3", 00:19:47.963 "uuid": "5e468bfe-d78c-5fc4-81f6-ae5ad965486b", 00:19:47.963 "is_configured": true, 00:19:47.963 "data_offset": 2048, 00:19:47.963 "data_size": 63488 00:19:47.963 }, 00:19:47.963 { 00:19:47.963 "name": "BaseBdev4", 00:19:47.963 "uuid": "2e3e5b9e-d994-5ff4-9a68-188c2978040c", 00:19:47.963 "is_configured": true, 00:19:47.963 "data_offset": 2048, 00:19:47.963 "data_size": 63488 00:19:47.963 } 00:19:47.963 ] 00:19:47.963 }' 00:19:47.963 12:18:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:47.963 12:18:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:48.532 12:18:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:48.532 12:18:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.532 12:18:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:48.532 [2024-11-25 12:18:44.472238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:48.532 [2024-11-25 12:18:44.472507] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:19:48.532 [2024-11-25 12:18:44.472535] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:48.532 [2024-11-25 12:18:44.472583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:48.532 [2024-11-25 12:18:44.486193] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:19:48.532 12:18:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.532 12:18:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:48.532 [2024-11-25 12:18:44.488712] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:49.496 12:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:49.496 12:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:49.496 12:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:49.496 12:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:49.496 12:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:49.496 12:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.496 12:18:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.496 12:18:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:49.496 12:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.496 12:18:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.496 12:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:49.496 "name": "raid_bdev1", 00:19:49.496 "uuid": "63f3166c-6656-4832-a315-04b955c46ce3", 00:19:49.496 "strip_size_kb": 0, 00:19:49.496 "state": "online", 
00:19:49.496 "raid_level": "raid1", 00:19:49.496 "superblock": true, 00:19:49.496 "num_base_bdevs": 4, 00:19:49.496 "num_base_bdevs_discovered": 3, 00:19:49.496 "num_base_bdevs_operational": 3, 00:19:49.496 "process": { 00:19:49.496 "type": "rebuild", 00:19:49.496 "target": "spare", 00:19:49.496 "progress": { 00:19:49.496 "blocks": 20480, 00:19:49.496 "percent": 32 00:19:49.496 } 00:19:49.496 }, 00:19:49.496 "base_bdevs_list": [ 00:19:49.496 { 00:19:49.496 "name": "spare", 00:19:49.496 "uuid": "a4f8d22f-2ffe-5dd6-ba69-319f8d005b0d", 00:19:49.496 "is_configured": true, 00:19:49.496 "data_offset": 2048, 00:19:49.496 "data_size": 63488 00:19:49.496 }, 00:19:49.496 { 00:19:49.496 "name": null, 00:19:49.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.496 "is_configured": false, 00:19:49.496 "data_offset": 2048, 00:19:49.496 "data_size": 63488 00:19:49.496 }, 00:19:49.496 { 00:19:49.496 "name": "BaseBdev3", 00:19:49.496 "uuid": "5e468bfe-d78c-5fc4-81f6-ae5ad965486b", 00:19:49.496 "is_configured": true, 00:19:49.497 "data_offset": 2048, 00:19:49.497 "data_size": 63488 00:19:49.497 }, 00:19:49.497 { 00:19:49.497 "name": "BaseBdev4", 00:19:49.497 "uuid": "2e3e5b9e-d994-5ff4-9a68-188c2978040c", 00:19:49.497 "is_configured": true, 00:19:49.497 "data_offset": 2048, 00:19:49.497 "data_size": 63488 00:19:49.497 } 00:19:49.497 ] 00:19:49.497 }' 00:19:49.497 12:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:49.755 12:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:49.755 12:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:49.755 12:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:49.755 12:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:49.755 12:18:45 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.755 12:18:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:49.755 [2024-11-25 12:18:45.642424] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:49.755 [2024-11-25 12:18:45.697642] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:49.755 [2024-11-25 12:18:45.697960] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:49.755 [2024-11-25 12:18:45.697991] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:49.755 [2024-11-25 12:18:45.698008] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:49.755 12:18:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.755 12:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:49.756 12:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:49.756 12:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:49.756 12:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:49.756 12:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:49.756 12:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:49.756 12:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:49.756 12:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:49.756 12:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:49.756 12:18:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:49.756 12:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.756 12:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.756 12:18:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.756 12:18:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:49.756 12:18:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.756 12:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:49.756 "name": "raid_bdev1", 00:19:49.756 "uuid": "63f3166c-6656-4832-a315-04b955c46ce3", 00:19:49.756 "strip_size_kb": 0, 00:19:49.756 "state": "online", 00:19:49.756 "raid_level": "raid1", 00:19:49.756 "superblock": true, 00:19:49.756 "num_base_bdevs": 4, 00:19:49.756 "num_base_bdevs_discovered": 2, 00:19:49.756 "num_base_bdevs_operational": 2, 00:19:49.756 "base_bdevs_list": [ 00:19:49.756 { 00:19:49.756 "name": null, 00:19:49.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.756 "is_configured": false, 00:19:49.756 "data_offset": 0, 00:19:49.756 "data_size": 63488 00:19:49.756 }, 00:19:49.756 { 00:19:49.756 "name": null, 00:19:49.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.756 "is_configured": false, 00:19:49.756 "data_offset": 2048, 00:19:49.756 "data_size": 63488 00:19:49.756 }, 00:19:49.756 { 00:19:49.756 "name": "BaseBdev3", 00:19:49.756 "uuid": "5e468bfe-d78c-5fc4-81f6-ae5ad965486b", 00:19:49.756 "is_configured": true, 00:19:49.756 "data_offset": 2048, 00:19:49.756 "data_size": 63488 00:19:49.756 }, 00:19:49.756 { 00:19:49.756 "name": "BaseBdev4", 00:19:49.756 "uuid": "2e3e5b9e-d994-5ff4-9a68-188c2978040c", 00:19:49.756 "is_configured": true, 00:19:49.756 "data_offset": 2048, 00:19:49.756 
"data_size": 63488 00:19:49.756 } 00:19:49.756 ] 00:19:49.756 }' 00:19:49.756 12:18:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:49.756 12:18:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:50.323 12:18:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:50.323 12:18:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.323 12:18:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:50.323 [2024-11-25 12:18:46.193120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:50.323 [2024-11-25 12:18:46.193359] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:50.323 [2024-11-25 12:18:46.193443] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:19:50.323 [2024-11-25 12:18:46.193666] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:50.323 [2024-11-25 12:18:46.194365] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:50.323 [2024-11-25 12:18:46.194538] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:50.323 [2024-11-25 12:18:46.194684] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:50.323 [2024-11-25 12:18:46.194709] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:19:50.323 [2024-11-25 12:18:46.194723] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:50.323 [2024-11-25 12:18:46.194769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:50.323 [2024-11-25 12:18:46.208635] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:19:50.323 spare 00:19:50.323 12:18:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.323 12:18:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:50.323 [2024-11-25 12:18:46.211242] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:51.260 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:51.260 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:51.260 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:51.260 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:51.260 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:51.260 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.260 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.260 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.260 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:51.260 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.260 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:51.260 "name": "raid_bdev1", 00:19:51.260 "uuid": "63f3166c-6656-4832-a315-04b955c46ce3", 00:19:51.260 "strip_size_kb": 0, 00:19:51.260 
"state": "online", 00:19:51.260 "raid_level": "raid1", 00:19:51.260 "superblock": true, 00:19:51.260 "num_base_bdevs": 4, 00:19:51.260 "num_base_bdevs_discovered": 3, 00:19:51.260 "num_base_bdevs_operational": 3, 00:19:51.260 "process": { 00:19:51.260 "type": "rebuild", 00:19:51.260 "target": "spare", 00:19:51.260 "progress": { 00:19:51.260 "blocks": 20480, 00:19:51.260 "percent": 32 00:19:51.260 } 00:19:51.260 }, 00:19:51.260 "base_bdevs_list": [ 00:19:51.260 { 00:19:51.260 "name": "spare", 00:19:51.260 "uuid": "a4f8d22f-2ffe-5dd6-ba69-319f8d005b0d", 00:19:51.260 "is_configured": true, 00:19:51.260 "data_offset": 2048, 00:19:51.260 "data_size": 63488 00:19:51.260 }, 00:19:51.260 { 00:19:51.260 "name": null, 00:19:51.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.260 "is_configured": false, 00:19:51.260 "data_offset": 2048, 00:19:51.260 "data_size": 63488 00:19:51.260 }, 00:19:51.260 { 00:19:51.260 "name": "BaseBdev3", 00:19:51.260 "uuid": "5e468bfe-d78c-5fc4-81f6-ae5ad965486b", 00:19:51.260 "is_configured": true, 00:19:51.260 "data_offset": 2048, 00:19:51.260 "data_size": 63488 00:19:51.260 }, 00:19:51.260 { 00:19:51.260 "name": "BaseBdev4", 00:19:51.260 "uuid": "2e3e5b9e-d994-5ff4-9a68-188c2978040c", 00:19:51.260 "is_configured": true, 00:19:51.260 "data_offset": 2048, 00:19:51.260 "data_size": 63488 00:19:51.260 } 00:19:51.260 ] 00:19:51.260 }' 00:19:51.260 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:51.260 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:51.260 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:51.518 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:51.518 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:51.518 12:18:47 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.518 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:51.518 [2024-11-25 12:18:47.376840] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:51.518 [2024-11-25 12:18:47.420043] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:51.518 [2024-11-25 12:18:47.420261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:51.518 [2024-11-25 12:18:47.420307] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:51.518 [2024-11-25 12:18:47.420321] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:51.518 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.518 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:51.518 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:51.518 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:51.518 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:51.518 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:51.518 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:51.518 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:51.519 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:51.519 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:51.519 12:18:47 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:51.519 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.519 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.519 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.519 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:51.519 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.519 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:51.519 "name": "raid_bdev1", 00:19:51.519 "uuid": "63f3166c-6656-4832-a315-04b955c46ce3", 00:19:51.519 "strip_size_kb": 0, 00:19:51.519 "state": "online", 00:19:51.519 "raid_level": "raid1", 00:19:51.519 "superblock": true, 00:19:51.519 "num_base_bdevs": 4, 00:19:51.519 "num_base_bdevs_discovered": 2, 00:19:51.519 "num_base_bdevs_operational": 2, 00:19:51.519 "base_bdevs_list": [ 00:19:51.519 { 00:19:51.519 "name": null, 00:19:51.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.519 "is_configured": false, 00:19:51.519 "data_offset": 0, 00:19:51.519 "data_size": 63488 00:19:51.519 }, 00:19:51.519 { 00:19:51.519 "name": null, 00:19:51.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.519 "is_configured": false, 00:19:51.519 "data_offset": 2048, 00:19:51.519 "data_size": 63488 00:19:51.519 }, 00:19:51.519 { 00:19:51.519 "name": "BaseBdev3", 00:19:51.519 "uuid": "5e468bfe-d78c-5fc4-81f6-ae5ad965486b", 00:19:51.519 "is_configured": true, 00:19:51.519 "data_offset": 2048, 00:19:51.519 "data_size": 63488 00:19:51.519 }, 00:19:51.519 { 00:19:51.519 "name": "BaseBdev4", 00:19:51.519 "uuid": "2e3e5b9e-d994-5ff4-9a68-188c2978040c", 00:19:51.519 "is_configured": true, 00:19:51.519 "data_offset": 2048, 00:19:51.519 
"data_size": 63488 00:19:51.519 } 00:19:51.519 ] 00:19:51.519 }' 00:19:51.519 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:51.519 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:52.086 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:52.086 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:52.086 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:52.086 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:52.086 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:52.086 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.086 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.086 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.086 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:52.086 12:18:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.086 12:18:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:52.086 "name": "raid_bdev1", 00:19:52.086 "uuid": "63f3166c-6656-4832-a315-04b955c46ce3", 00:19:52.086 "strip_size_kb": 0, 00:19:52.086 "state": "online", 00:19:52.086 "raid_level": "raid1", 00:19:52.086 "superblock": true, 00:19:52.086 "num_base_bdevs": 4, 00:19:52.086 "num_base_bdevs_discovered": 2, 00:19:52.086 "num_base_bdevs_operational": 2, 00:19:52.086 "base_bdevs_list": [ 00:19:52.086 { 00:19:52.086 "name": null, 00:19:52.086 "uuid": "00000000-0000-0000-0000-000000000000", 
00:19:52.087 "is_configured": false, 00:19:52.087 "data_offset": 0, 00:19:52.087 "data_size": 63488 00:19:52.087 }, 00:19:52.087 { 00:19:52.087 "name": null, 00:19:52.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.087 "is_configured": false, 00:19:52.087 "data_offset": 2048, 00:19:52.087 "data_size": 63488 00:19:52.087 }, 00:19:52.087 { 00:19:52.087 "name": "BaseBdev3", 00:19:52.087 "uuid": "5e468bfe-d78c-5fc4-81f6-ae5ad965486b", 00:19:52.087 "is_configured": true, 00:19:52.087 "data_offset": 2048, 00:19:52.087 "data_size": 63488 00:19:52.087 }, 00:19:52.087 { 00:19:52.087 "name": "BaseBdev4", 00:19:52.087 "uuid": "2e3e5b9e-d994-5ff4-9a68-188c2978040c", 00:19:52.087 "is_configured": true, 00:19:52.087 "data_offset": 2048, 00:19:52.087 "data_size": 63488 00:19:52.087 } 00:19:52.087 ] 00:19:52.087 }' 00:19:52.087 12:18:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:52.087 12:18:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:52.087 12:18:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:52.087 12:18:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:52.087 12:18:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:52.087 12:18:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.087 12:18:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:52.087 12:18:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.087 12:18:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:52.087 12:18:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.087 12:18:48 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:52.087 [2024-11-25 12:18:48.131060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:52.087 [2024-11-25 12:18:48.131264] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:52.087 [2024-11-25 12:18:48.131326] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:19:52.087 [2024-11-25 12:18:48.131355] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:52.087 [2024-11-25 12:18:48.131927] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:52.087 [2024-11-25 12:18:48.131959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:52.087 [2024-11-25 12:18:48.132065] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:52.087 [2024-11-25 12:18:48.132099] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:19:52.087 [2024-11-25 12:18:48.132113] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:52.087 [2024-11-25 12:18:48.132126] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:52.087 BaseBdev1 00:19:52.087 12:18:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.087 12:18:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:53.083 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:53.083 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:53.083 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:19:53.083 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:53.083 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:53.083 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:53.083 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:53.083 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:53.083 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:53.083 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:53.083 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.083 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.083 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.083 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:53.083 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.340 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:53.340 "name": "raid_bdev1", 00:19:53.341 "uuid": "63f3166c-6656-4832-a315-04b955c46ce3", 00:19:53.341 "strip_size_kb": 0, 00:19:53.341 "state": "online", 00:19:53.341 "raid_level": "raid1", 00:19:53.341 "superblock": true, 00:19:53.341 "num_base_bdevs": 4, 00:19:53.341 "num_base_bdevs_discovered": 2, 00:19:53.341 "num_base_bdevs_operational": 2, 00:19:53.341 "base_bdevs_list": [ 00:19:53.341 { 00:19:53.341 "name": null, 00:19:53.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.341 "is_configured": false, 00:19:53.341 
"data_offset": 0, 00:19:53.341 "data_size": 63488 00:19:53.341 }, 00:19:53.341 { 00:19:53.341 "name": null, 00:19:53.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.341 "is_configured": false, 00:19:53.341 "data_offset": 2048, 00:19:53.341 "data_size": 63488 00:19:53.341 }, 00:19:53.341 { 00:19:53.341 "name": "BaseBdev3", 00:19:53.341 "uuid": "5e468bfe-d78c-5fc4-81f6-ae5ad965486b", 00:19:53.341 "is_configured": true, 00:19:53.341 "data_offset": 2048, 00:19:53.341 "data_size": 63488 00:19:53.341 }, 00:19:53.341 { 00:19:53.341 "name": "BaseBdev4", 00:19:53.341 "uuid": "2e3e5b9e-d994-5ff4-9a68-188c2978040c", 00:19:53.341 "is_configured": true, 00:19:53.341 "data_offset": 2048, 00:19:53.341 "data_size": 63488 00:19:53.341 } 00:19:53.341 ] 00:19:53.341 }' 00:19:53.341 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:53.341 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:53.908 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:53.908 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:53.908 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:53.908 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:53.908 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:53.908 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.908 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.908 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:53.908 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:19:53.908 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.908 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:53.908 "name": "raid_bdev1", 00:19:53.908 "uuid": "63f3166c-6656-4832-a315-04b955c46ce3", 00:19:53.908 "strip_size_kb": 0, 00:19:53.908 "state": "online", 00:19:53.908 "raid_level": "raid1", 00:19:53.908 "superblock": true, 00:19:53.908 "num_base_bdevs": 4, 00:19:53.908 "num_base_bdevs_discovered": 2, 00:19:53.908 "num_base_bdevs_operational": 2, 00:19:53.908 "base_bdevs_list": [ 00:19:53.908 { 00:19:53.908 "name": null, 00:19:53.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.908 "is_configured": false, 00:19:53.908 "data_offset": 0, 00:19:53.908 "data_size": 63488 00:19:53.908 }, 00:19:53.908 { 00:19:53.908 "name": null, 00:19:53.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.908 "is_configured": false, 00:19:53.908 "data_offset": 2048, 00:19:53.908 "data_size": 63488 00:19:53.908 }, 00:19:53.908 { 00:19:53.908 "name": "BaseBdev3", 00:19:53.908 "uuid": "5e468bfe-d78c-5fc4-81f6-ae5ad965486b", 00:19:53.908 "is_configured": true, 00:19:53.908 "data_offset": 2048, 00:19:53.908 "data_size": 63488 00:19:53.908 }, 00:19:53.908 { 00:19:53.908 "name": "BaseBdev4", 00:19:53.908 "uuid": "2e3e5b9e-d994-5ff4-9a68-188c2978040c", 00:19:53.908 "is_configured": true, 00:19:53.908 "data_offset": 2048, 00:19:53.908 "data_size": 63488 00:19:53.908 } 00:19:53.908 ] 00:19:53.908 }' 00:19:53.908 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:53.908 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:53.908 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:53.908 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 
00:19:53.908 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:53.908 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:19:53.908 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:53.908 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:53.908 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:53.908 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:53.908 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:53.908 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:53.908 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.908 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:53.908 [2024-11-25 12:18:49.867901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:53.908 [2024-11-25 12:18:49.868106] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:19:53.908 [2024-11-25 12:18:49.868131] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:53.908 request: 00:19:53.908 { 00:19:53.908 "base_bdev": "BaseBdev1", 00:19:53.908 "raid_bdev": "raid_bdev1", 00:19:53.908 "method": "bdev_raid_add_base_bdev", 00:19:53.908 "req_id": 1 00:19:53.908 } 00:19:53.908 Got JSON-RPC error response 00:19:53.908 response: 00:19:53.908 { 00:19:53.908 "code": -22, 
00:19:53.908 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:53.908 } 00:19:53.908 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:53.908 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:19:53.908 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:53.908 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:53.908 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:53.908 12:18:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:54.842 12:18:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:54.842 12:18:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:54.842 12:18:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:54.842 12:18:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:54.842 12:18:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:54.842 12:18:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:54.842 12:18:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:54.842 12:18:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:54.842 12:18:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:54.842 12:18:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:54.842 12:18:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.842 12:18:50 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.842 12:18:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.842 12:18:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:54.842 12:18:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.100 12:18:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:55.100 "name": "raid_bdev1", 00:19:55.100 "uuid": "63f3166c-6656-4832-a315-04b955c46ce3", 00:19:55.100 "strip_size_kb": 0, 00:19:55.100 "state": "online", 00:19:55.100 "raid_level": "raid1", 00:19:55.100 "superblock": true, 00:19:55.100 "num_base_bdevs": 4, 00:19:55.100 "num_base_bdevs_discovered": 2, 00:19:55.100 "num_base_bdevs_operational": 2, 00:19:55.100 "base_bdevs_list": [ 00:19:55.100 { 00:19:55.100 "name": null, 00:19:55.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.100 "is_configured": false, 00:19:55.100 "data_offset": 0, 00:19:55.100 "data_size": 63488 00:19:55.100 }, 00:19:55.100 { 00:19:55.100 "name": null, 00:19:55.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.100 "is_configured": false, 00:19:55.100 "data_offset": 2048, 00:19:55.100 "data_size": 63488 00:19:55.100 }, 00:19:55.100 { 00:19:55.100 "name": "BaseBdev3", 00:19:55.100 "uuid": "5e468bfe-d78c-5fc4-81f6-ae5ad965486b", 00:19:55.100 "is_configured": true, 00:19:55.100 "data_offset": 2048, 00:19:55.100 "data_size": 63488 00:19:55.100 }, 00:19:55.100 { 00:19:55.100 "name": "BaseBdev4", 00:19:55.100 "uuid": "2e3e5b9e-d994-5ff4-9a68-188c2978040c", 00:19:55.100 "is_configured": true, 00:19:55.100 "data_offset": 2048, 00:19:55.100 "data_size": 63488 00:19:55.100 } 00:19:55.100 ] 00:19:55.100 }' 00:19:55.100 12:18:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:55.100 12:18:50 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:55.359 12:18:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:55.359 12:18:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:55.359 12:18:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:55.359 12:18:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:55.359 12:18:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:55.359 12:18:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.359 12:18:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.359 12:18:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.359 12:18:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:55.359 12:18:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.359 12:18:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:55.359 "name": "raid_bdev1", 00:19:55.359 "uuid": "63f3166c-6656-4832-a315-04b955c46ce3", 00:19:55.359 "strip_size_kb": 0, 00:19:55.359 "state": "online", 00:19:55.359 "raid_level": "raid1", 00:19:55.359 "superblock": true, 00:19:55.359 "num_base_bdevs": 4, 00:19:55.359 "num_base_bdevs_discovered": 2, 00:19:55.359 "num_base_bdevs_operational": 2, 00:19:55.359 "base_bdevs_list": [ 00:19:55.359 { 00:19:55.359 "name": null, 00:19:55.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.360 "is_configured": false, 00:19:55.360 "data_offset": 0, 00:19:55.360 "data_size": 63488 00:19:55.360 }, 00:19:55.360 { 00:19:55.360 "name": null, 00:19:55.360 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:55.360 "is_configured": false, 00:19:55.360 "data_offset": 2048, 00:19:55.360 "data_size": 63488 00:19:55.360 }, 00:19:55.360 { 00:19:55.360 "name": "BaseBdev3", 00:19:55.360 "uuid": "5e468bfe-d78c-5fc4-81f6-ae5ad965486b", 00:19:55.360 "is_configured": true, 00:19:55.360 "data_offset": 2048, 00:19:55.360 "data_size": 63488 00:19:55.360 }, 00:19:55.360 { 00:19:55.360 "name": "BaseBdev4", 00:19:55.360 "uuid": "2e3e5b9e-d994-5ff4-9a68-188c2978040c", 00:19:55.360 "is_configured": true, 00:19:55.360 "data_offset": 2048, 00:19:55.360 "data_size": 63488 00:19:55.360 } 00:19:55.360 ] 00:19:55.360 }' 00:19:55.360 12:18:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:55.618 12:18:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:55.618 12:18:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:55.618 12:18:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:55.618 12:18:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79492 00:19:55.619 12:18:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79492 ']' 00:19:55.619 12:18:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79492 00:19:55.619 12:18:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:19:55.619 12:18:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:55.619 12:18:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79492 00:19:55.619 killing process with pid 79492 00:19:55.619 Received shutdown signal, test time was about 19.254176 seconds 00:19:55.619 00:19:55.619 Latency(us) 00:19:55.619 [2024-11-25T12:18:51.710Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:19:55.619 [2024-11-25T12:18:51.710Z] =================================================================================================================== 00:19:55.619 [2024-11-25T12:18:51.710Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:55.619 12:18:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:55.619 12:18:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:55.619 12:18:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79492' 00:19:55.619 12:18:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79492 00:19:55.619 [2024-11-25 12:18:51.532889] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:55.619 12:18:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79492 00:19:55.619 [2024-11-25 12:18:51.533047] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:55.619 [2024-11-25 12:18:51.533140] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:55.619 [2024-11-25 12:18:51.533161] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:55.877 [2024-11-25 12:18:51.909707] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:57.252 ************************************ 00:19:57.252 END TEST raid_rebuild_test_sb_io 00:19:57.252 ************************************ 00:19:57.252 12:18:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:19:57.252 00:19:57.252 real 0m22.794s 00:19:57.252 user 0m30.890s 00:19:57.252 sys 0m2.311s 00:19:57.252 12:18:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:57.252 12:18:52 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:19:57.252 12:18:53 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:19:57.252 12:18:53 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:19:57.252 12:18:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:57.252 12:18:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:57.252 12:18:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:57.252 ************************************ 00:19:57.252 START TEST raid5f_state_function_test 00:19:57.252 ************************************ 00:19:57.252 12:18:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:19:57.252 12:18:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:19:57.252 12:18:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:19:57.252 12:18:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:19:57.252 12:18:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:57.252 12:18:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:57.252 12:18:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:57.253 12:18:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:57.253 12:18:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:57.253 12:18:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:57.253 12:18:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:57.253 12:18:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:57.253 12:18:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:57.253 12:18:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:19:57.253 12:18:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:57.253 12:18:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:57.253 12:18:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:57.253 12:18:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:57.253 Process raid pid: 80225 00:19:57.253 12:18:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:57.253 12:18:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:57.253 12:18:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:57.253 12:18:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:57.253 12:18:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:19:57.253 12:18:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:19:57.253 12:18:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:19:57.253 12:18:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:19:57.253 12:18:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:19:57.253 12:18:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80225 00:19:57.253 12:18:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80225' 00:19:57.253 12:18:53 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:57.253 12:18:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80225 00:19:57.253 12:18:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80225 ']' 00:19:57.253 12:18:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:57.253 12:18:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:57.253 12:18:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:57.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:57.253 12:18:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:57.253 12:18:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.253 [2024-11-25 12:18:53.167919] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
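The `waitforlisten 80225` call traced above blocks until the freshly launched bdev_svc target is accepting RPCs on /var/tmp/spdk.sock. A minimal sketch of that polling pattern follows; the retry count, sleep interval, and default socket path are illustrative, and the real autotest_common.sh helper additionally drives rpc.py to confirm the framework is initialized:

```shell
# Sketch of a waitforlisten-style gate: poll until the target's UNIX-domain
# RPC socket exists, while verifying the target process is still alive.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
    while ((max_retries-- > 0)); do
        kill -0 "$pid" 2>/dev/null || return 1  # target died during startup
        [[ -S $rpc_addr ]] && return 0          # socket is up: ready for RPCs
        sleep 0.1
    done
    return 1                                    # timed out waiting for socket
}
```

With bdev_svc started in the background, a call like `waitforlisten "$raid_pid"` would gate the first rpc_cmd invocation, which is the role it plays in the trace.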
00:19:57.253 [2024-11-25 12:18:53.168264] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:57.511 [2024-11-25 12:18:53.344883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.511 [2024-11-25 12:18:53.475540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.769 [2024-11-25 12:18:53.680945] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:57.769 [2024-11-25 12:18:53.681186] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:58.336 12:18:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:58.336 12:18:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:19:58.336 12:18:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:58.336 12:18:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.336 12:18:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.336 [2024-11-25 12:18:54.153277] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:58.336 [2024-11-25 12:18:54.153505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:58.336 [2024-11-25 12:18:54.153651] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:58.336 [2024-11-25 12:18:54.153716] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:58.336 [2024-11-25 12:18:54.153897] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:19:58.336 [2024-11-25 12:18:54.153968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:58.336 12:18:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.336 12:18:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:58.336 12:18:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:58.336 12:18:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:58.336 12:18:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:58.336 12:18:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:58.336 12:18:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:58.336 12:18:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:58.336 12:18:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:58.336 12:18:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:58.336 12:18:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:58.336 12:18:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.336 12:18:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.336 12:18:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:58.336 12:18:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.336 12:18:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:19:58.336 12:18:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:58.336 "name": "Existed_Raid", 00:19:58.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.336 "strip_size_kb": 64, 00:19:58.336 "state": "configuring", 00:19:58.336 "raid_level": "raid5f", 00:19:58.336 "superblock": false, 00:19:58.336 "num_base_bdevs": 3, 00:19:58.336 "num_base_bdevs_discovered": 0, 00:19:58.336 "num_base_bdevs_operational": 3, 00:19:58.336 "base_bdevs_list": [ 00:19:58.336 { 00:19:58.336 "name": "BaseBdev1", 00:19:58.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.336 "is_configured": false, 00:19:58.336 "data_offset": 0, 00:19:58.336 "data_size": 0 00:19:58.336 }, 00:19:58.336 { 00:19:58.336 "name": "BaseBdev2", 00:19:58.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.336 "is_configured": false, 00:19:58.336 "data_offset": 0, 00:19:58.336 "data_size": 0 00:19:58.336 }, 00:19:58.336 { 00:19:58.336 "name": "BaseBdev3", 00:19:58.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.336 "is_configured": false, 00:19:58.336 "data_offset": 0, 00:19:58.336 "data_size": 0 00:19:58.336 } 00:19:58.336 ] 00:19:58.336 }' 00:19:58.336 12:18:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:58.336 12:18:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.594 12:18:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:58.594 12:18:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.594 12:18:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.594 [2024-11-25 12:18:54.645353] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:58.594 [2024-11-25 12:18:54.645420] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:19:58.594 12:18:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.594 12:18:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:58.594 12:18:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.594 12:18:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.594 [2024-11-25 12:18:54.657338] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:58.594 [2024-11-25 12:18:54.657519] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:58.594 [2024-11-25 12:18:54.657639] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:58.594 [2024-11-25 12:18:54.657699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:58.594 [2024-11-25 12:18:54.657815] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:58.594 [2024-11-25 12:18:54.657874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:58.594 12:18:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.594 12:18:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:58.594 12:18:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.594 12:18:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.852 [2024-11-25 12:18:54.705990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:58.852 BaseBdev1 00:19:58.852 12:18:54 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.852 12:18:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:58.852 12:18:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:58.852 12:18:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:58.852 12:18:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:58.852 12:18:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:58.852 12:18:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:58.852 12:18:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:58.852 12:18:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.852 12:18:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.852 12:18:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.852 12:18:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:58.852 12:18:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.852 12:18:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.852 [ 00:19:58.852 { 00:19:58.852 "name": "BaseBdev1", 00:19:58.852 "aliases": [ 00:19:58.852 "378103f3-d2ea-4592-a471-626b7482cd0e" 00:19:58.852 ], 00:19:58.852 "product_name": "Malloc disk", 00:19:58.852 "block_size": 512, 00:19:58.852 "num_blocks": 65536, 00:19:58.852 "uuid": "378103f3-d2ea-4592-a471-626b7482cd0e", 00:19:58.852 "assigned_rate_limits": { 00:19:58.852 "rw_ios_per_sec": 0, 00:19:58.852 
"rw_mbytes_per_sec": 0, 00:19:58.852 "r_mbytes_per_sec": 0, 00:19:58.852 "w_mbytes_per_sec": 0 00:19:58.852 }, 00:19:58.852 "claimed": true, 00:19:58.852 "claim_type": "exclusive_write", 00:19:58.852 "zoned": false, 00:19:58.852 "supported_io_types": { 00:19:58.852 "read": true, 00:19:58.852 "write": true, 00:19:58.852 "unmap": true, 00:19:58.852 "flush": true, 00:19:58.852 "reset": true, 00:19:58.852 "nvme_admin": false, 00:19:58.852 "nvme_io": false, 00:19:58.852 "nvme_io_md": false, 00:19:58.852 "write_zeroes": true, 00:19:58.852 "zcopy": true, 00:19:58.852 "get_zone_info": false, 00:19:58.852 "zone_management": false, 00:19:58.852 "zone_append": false, 00:19:58.852 "compare": false, 00:19:58.852 "compare_and_write": false, 00:19:58.852 "abort": true, 00:19:58.852 "seek_hole": false, 00:19:58.852 "seek_data": false, 00:19:58.852 "copy": true, 00:19:58.852 "nvme_iov_md": false 00:19:58.852 }, 00:19:58.852 "memory_domains": [ 00:19:58.852 { 00:19:58.852 "dma_device_id": "system", 00:19:58.852 "dma_device_type": 1 00:19:58.852 }, 00:19:58.852 { 00:19:58.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:58.852 "dma_device_type": 2 00:19:58.852 } 00:19:58.853 ], 00:19:58.853 "driver_specific": {} 00:19:58.853 } 00:19:58.853 ] 00:19:58.853 12:18:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.853 12:18:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:58.853 12:18:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:58.853 12:18:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:58.853 12:18:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:58.853 12:18:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:58.853 12:18:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:58.853 12:18:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:58.853 12:18:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:58.853 12:18:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:58.853 12:18:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:58.853 12:18:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:58.853 12:18:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:58.853 12:18:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.853 12:18:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.853 12:18:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.853 12:18:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.853 12:18:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:58.853 "name": "Existed_Raid", 00:19:58.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.853 "strip_size_kb": 64, 00:19:58.853 "state": "configuring", 00:19:58.853 "raid_level": "raid5f", 00:19:58.853 "superblock": false, 00:19:58.853 "num_base_bdevs": 3, 00:19:58.853 "num_base_bdevs_discovered": 1, 00:19:58.853 "num_base_bdevs_operational": 3, 00:19:58.853 "base_bdevs_list": [ 00:19:58.853 { 00:19:58.853 "name": "BaseBdev1", 00:19:58.853 "uuid": "378103f3-d2ea-4592-a471-626b7482cd0e", 00:19:58.853 "is_configured": true, 00:19:58.853 "data_offset": 0, 00:19:58.853 "data_size": 65536 00:19:58.853 }, 00:19:58.853 { 00:19:58.853 "name": 
"BaseBdev2", 00:19:58.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.853 "is_configured": false, 00:19:58.853 "data_offset": 0, 00:19:58.853 "data_size": 0 00:19:58.853 }, 00:19:58.853 { 00:19:58.853 "name": "BaseBdev3", 00:19:58.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.853 "is_configured": false, 00:19:58.853 "data_offset": 0, 00:19:58.853 "data_size": 0 00:19:58.853 } 00:19:58.853 ] 00:19:58.853 }' 00:19:58.853 12:18:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:58.853 12:18:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.419 12:18:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:59.419 12:18:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.419 12:18:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.419 [2024-11-25 12:18:55.266231] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:59.419 [2024-11-25 12:18:55.266306] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:59.419 12:18:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.419 12:18:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:59.419 12:18:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.419 12:18:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.419 [2024-11-25 12:18:55.278271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:59.419 [2024-11-25 12:18:55.280702] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:19:59.419 [2024-11-25 12:18:55.280754] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:59.419 [2024-11-25 12:18:55.280770] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:59.419 [2024-11-25 12:18:55.280785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:59.419 12:18:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.419 12:18:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:59.419 12:18:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:59.419 12:18:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:59.419 12:18:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:59.419 12:18:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:59.419 12:18:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:59.419 12:18:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:59.419 12:18:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:59.419 12:18:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:59.419 12:18:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:59.419 12:18:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:59.419 12:18:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:59.419 12:18:55 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.419 12:18:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:59.419 12:18:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.419 12:18:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.419 12:18:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.419 12:18:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:59.419 "name": "Existed_Raid", 00:19:59.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.419 "strip_size_kb": 64, 00:19:59.419 "state": "configuring", 00:19:59.419 "raid_level": "raid5f", 00:19:59.419 "superblock": false, 00:19:59.419 "num_base_bdevs": 3, 00:19:59.419 "num_base_bdevs_discovered": 1, 00:19:59.419 "num_base_bdevs_operational": 3, 00:19:59.419 "base_bdevs_list": [ 00:19:59.419 { 00:19:59.419 "name": "BaseBdev1", 00:19:59.419 "uuid": "378103f3-d2ea-4592-a471-626b7482cd0e", 00:19:59.419 "is_configured": true, 00:19:59.419 "data_offset": 0, 00:19:59.419 "data_size": 65536 00:19:59.419 }, 00:19:59.419 { 00:19:59.419 "name": "BaseBdev2", 00:19:59.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.419 "is_configured": false, 00:19:59.419 "data_offset": 0, 00:19:59.419 "data_size": 0 00:19:59.419 }, 00:19:59.419 { 00:19:59.419 "name": "BaseBdev3", 00:19:59.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.419 "is_configured": false, 00:19:59.419 "data_offset": 0, 00:19:59.419 "data_size": 0 00:19:59.419 } 00:19:59.419 ] 00:19:59.419 }' 00:19:59.419 12:18:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:59.419 12:18:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.989 12:18:55 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:59.989 12:18:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.989 12:18:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.989 [2024-11-25 12:18:55.825503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:59.989 BaseBdev2 00:19:59.989 12:18:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.989 12:18:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:59.989 12:18:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:59.989 12:18:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:59.989 12:18:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:59.989 12:18:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:59.989 12:18:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:59.989 12:18:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:59.989 12:18:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.989 12:18:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.989 12:18:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.989 12:18:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:59.989 12:18:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.989 12:18:55 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:59.989 [ 00:19:59.989 { 00:19:59.989 "name": "BaseBdev2", 00:19:59.989 "aliases": [ 00:19:59.989 "4df79e01-8335-42e8-a0cf-f4faf688dad8" 00:19:59.989 ], 00:19:59.989 "product_name": "Malloc disk", 00:19:59.989 "block_size": 512, 00:19:59.989 "num_blocks": 65536, 00:19:59.989 "uuid": "4df79e01-8335-42e8-a0cf-f4faf688dad8", 00:19:59.989 "assigned_rate_limits": { 00:19:59.989 "rw_ios_per_sec": 0, 00:19:59.989 "rw_mbytes_per_sec": 0, 00:19:59.989 "r_mbytes_per_sec": 0, 00:19:59.989 "w_mbytes_per_sec": 0 00:19:59.989 }, 00:19:59.989 "claimed": true, 00:19:59.989 "claim_type": "exclusive_write", 00:19:59.989 "zoned": false, 00:19:59.989 "supported_io_types": { 00:19:59.989 "read": true, 00:19:59.989 "write": true, 00:19:59.989 "unmap": true, 00:19:59.989 "flush": true, 00:19:59.989 "reset": true, 00:19:59.989 "nvme_admin": false, 00:19:59.989 "nvme_io": false, 00:19:59.989 "nvme_io_md": false, 00:19:59.989 "write_zeroes": true, 00:19:59.989 "zcopy": true, 00:19:59.989 "get_zone_info": false, 00:19:59.989 "zone_management": false, 00:19:59.989 "zone_append": false, 00:19:59.989 "compare": false, 00:19:59.989 "compare_and_write": false, 00:19:59.989 "abort": true, 00:19:59.989 "seek_hole": false, 00:19:59.989 "seek_data": false, 00:19:59.989 "copy": true, 00:19:59.989 "nvme_iov_md": false 00:19:59.989 }, 00:19:59.989 "memory_domains": [ 00:19:59.989 { 00:19:59.989 "dma_device_id": "system", 00:19:59.989 "dma_device_type": 1 00:19:59.989 }, 00:19:59.989 { 00:19:59.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:59.989 "dma_device_type": 2 00:19:59.989 } 00:19:59.989 ], 00:19:59.989 "driver_specific": {} 00:19:59.989 } 00:19:59.990 ] 00:19:59.990 12:18:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.990 12:18:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:59.990 12:18:55 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:59.990 12:18:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:59.990 12:18:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:59.990 12:18:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:59.990 12:18:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:59.990 12:18:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:59.990 12:18:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:59.990 12:18:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:59.990 12:18:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:59.990 12:18:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:59.990 12:18:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:59.990 12:18:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:59.990 12:18:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.990 12:18:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:59.990 12:18:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.990 12:18:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.990 12:18:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.990 12:18:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:19:59.990 "name": "Existed_Raid", 00:19:59.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.990 "strip_size_kb": 64, 00:19:59.990 "state": "configuring", 00:19:59.990 "raid_level": "raid5f", 00:19:59.990 "superblock": false, 00:19:59.990 "num_base_bdevs": 3, 00:19:59.990 "num_base_bdevs_discovered": 2, 00:19:59.990 "num_base_bdevs_operational": 3, 00:19:59.990 "base_bdevs_list": [ 00:19:59.990 { 00:19:59.990 "name": "BaseBdev1", 00:19:59.990 "uuid": "378103f3-d2ea-4592-a471-626b7482cd0e", 00:19:59.990 "is_configured": true, 00:19:59.990 "data_offset": 0, 00:19:59.990 "data_size": 65536 00:19:59.990 }, 00:19:59.990 { 00:19:59.990 "name": "BaseBdev2", 00:19:59.990 "uuid": "4df79e01-8335-42e8-a0cf-f4faf688dad8", 00:19:59.990 "is_configured": true, 00:19:59.990 "data_offset": 0, 00:19:59.990 "data_size": 65536 00:19:59.990 }, 00:19:59.990 { 00:19:59.990 "name": "BaseBdev3", 00:19:59.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.990 "is_configured": false, 00:19:59.990 "data_offset": 0, 00:19:59.990 "data_size": 0 00:19:59.990 } 00:19:59.990 ] 00:19:59.990 }' 00:19:59.990 12:18:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:59.990 12:18:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.559 12:18:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:00.559 12:18:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.559 12:18:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.559 [2024-11-25 12:18:56.480004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:00.559 [2024-11-25 12:18:56.480088] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:00.559 [2024-11-25 12:18:56.480108] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:20:00.559 [2024-11-25 12:18:56.480488] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:00.559 [2024-11-25 12:18:56.485739] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:00.559 [2024-11-25 12:18:56.485906] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:00.559 [2024-11-25 12:18:56.486302] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:00.559 BaseBdev3 00:20:00.559 12:18:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.559 12:18:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:20:00.559 12:18:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:00.559 12:18:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:00.559 12:18:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:00.559 12:18:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:00.559 12:18:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:00.559 12:18:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:00.559 12:18:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.559 12:18:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.559 12:18:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.559 12:18:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:20:00.559 12:18:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.559 12:18:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.559 [ 00:20:00.559 { 00:20:00.559 "name": "BaseBdev3", 00:20:00.559 "aliases": [ 00:20:00.559 "faba3470-4ae1-4269-8562-e088c5d8fb89" 00:20:00.559 ], 00:20:00.559 "product_name": "Malloc disk", 00:20:00.559 "block_size": 512, 00:20:00.559 "num_blocks": 65536, 00:20:00.559 "uuid": "faba3470-4ae1-4269-8562-e088c5d8fb89", 00:20:00.559 "assigned_rate_limits": { 00:20:00.559 "rw_ios_per_sec": 0, 00:20:00.559 "rw_mbytes_per_sec": 0, 00:20:00.559 "r_mbytes_per_sec": 0, 00:20:00.559 "w_mbytes_per_sec": 0 00:20:00.559 }, 00:20:00.559 "claimed": true, 00:20:00.559 "claim_type": "exclusive_write", 00:20:00.559 "zoned": false, 00:20:00.559 "supported_io_types": { 00:20:00.559 "read": true, 00:20:00.559 "write": true, 00:20:00.559 "unmap": true, 00:20:00.559 "flush": true, 00:20:00.559 "reset": true, 00:20:00.559 "nvme_admin": false, 00:20:00.559 "nvme_io": false, 00:20:00.559 "nvme_io_md": false, 00:20:00.559 "write_zeroes": true, 00:20:00.560 "zcopy": true, 00:20:00.560 "get_zone_info": false, 00:20:00.560 "zone_management": false, 00:20:00.560 "zone_append": false, 00:20:00.560 "compare": false, 00:20:00.560 "compare_and_write": false, 00:20:00.560 "abort": true, 00:20:00.560 "seek_hole": false, 00:20:00.560 "seek_data": false, 00:20:00.560 "copy": true, 00:20:00.560 "nvme_iov_md": false 00:20:00.560 }, 00:20:00.560 "memory_domains": [ 00:20:00.560 { 00:20:00.560 "dma_device_id": "system", 00:20:00.560 "dma_device_type": 1 00:20:00.560 }, 00:20:00.560 { 00:20:00.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:00.560 "dma_device_type": 2 00:20:00.560 } 00:20:00.560 ], 00:20:00.560 "driver_specific": {} 00:20:00.560 } 00:20:00.560 ] 00:20:00.560 12:18:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:20:00.560 12:18:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:00.560 12:18:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:00.560 12:18:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:00.560 12:18:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:20:00.560 12:18:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:00.560 12:18:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:00.560 12:18:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:00.560 12:18:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:00.560 12:18:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:00.560 12:18:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:00.560 12:18:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:00.560 12:18:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:00.560 12:18:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:00.560 12:18:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.560 12:18:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.560 12:18:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.560 12:18:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:00.560 12:18:56 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.560 12:18:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:00.560 "name": "Existed_Raid", 00:20:00.560 "uuid": "2b77bbf6-6a6f-4aef-9ef8-2f3cb7955fc8", 00:20:00.560 "strip_size_kb": 64, 00:20:00.560 "state": "online", 00:20:00.560 "raid_level": "raid5f", 00:20:00.560 "superblock": false, 00:20:00.560 "num_base_bdevs": 3, 00:20:00.560 "num_base_bdevs_discovered": 3, 00:20:00.560 "num_base_bdevs_operational": 3, 00:20:00.560 "base_bdevs_list": [ 00:20:00.560 { 00:20:00.560 "name": "BaseBdev1", 00:20:00.560 "uuid": "378103f3-d2ea-4592-a471-626b7482cd0e", 00:20:00.560 "is_configured": true, 00:20:00.560 "data_offset": 0, 00:20:00.560 "data_size": 65536 00:20:00.560 }, 00:20:00.560 { 00:20:00.560 "name": "BaseBdev2", 00:20:00.560 "uuid": "4df79e01-8335-42e8-a0cf-f4faf688dad8", 00:20:00.560 "is_configured": true, 00:20:00.560 "data_offset": 0, 00:20:00.560 "data_size": 65536 00:20:00.560 }, 00:20:00.560 { 00:20:00.560 "name": "BaseBdev3", 00:20:00.560 "uuid": "faba3470-4ae1-4269-8562-e088c5d8fb89", 00:20:00.560 "is_configured": true, 00:20:00.560 "data_offset": 0, 00:20:00.560 "data_size": 65536 00:20:00.560 } 00:20:00.560 ] 00:20:00.560 }' 00:20:00.560 12:18:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:00.560 12:18:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.127 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:01.127 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:01.127 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:01.127 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:01.127 12:18:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:01.127 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:01.127 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:01.127 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:01.127 12:18:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.127 12:18:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.127 [2024-11-25 12:18:57.076416] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:01.127 12:18:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.127 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:01.127 "name": "Existed_Raid", 00:20:01.127 "aliases": [ 00:20:01.127 "2b77bbf6-6a6f-4aef-9ef8-2f3cb7955fc8" 00:20:01.127 ], 00:20:01.127 "product_name": "Raid Volume", 00:20:01.127 "block_size": 512, 00:20:01.127 "num_blocks": 131072, 00:20:01.127 "uuid": "2b77bbf6-6a6f-4aef-9ef8-2f3cb7955fc8", 00:20:01.127 "assigned_rate_limits": { 00:20:01.127 "rw_ios_per_sec": 0, 00:20:01.127 "rw_mbytes_per_sec": 0, 00:20:01.127 "r_mbytes_per_sec": 0, 00:20:01.127 "w_mbytes_per_sec": 0 00:20:01.127 }, 00:20:01.127 "claimed": false, 00:20:01.127 "zoned": false, 00:20:01.127 "supported_io_types": { 00:20:01.127 "read": true, 00:20:01.127 "write": true, 00:20:01.127 "unmap": false, 00:20:01.127 "flush": false, 00:20:01.127 "reset": true, 00:20:01.127 "nvme_admin": false, 00:20:01.127 "nvme_io": false, 00:20:01.127 "nvme_io_md": false, 00:20:01.127 "write_zeroes": true, 00:20:01.127 "zcopy": false, 00:20:01.127 "get_zone_info": false, 00:20:01.127 "zone_management": false, 00:20:01.127 "zone_append": false, 
00:20:01.127 "compare": false, 00:20:01.127 "compare_and_write": false, 00:20:01.127 "abort": false, 00:20:01.127 "seek_hole": false, 00:20:01.127 "seek_data": false, 00:20:01.127 "copy": false, 00:20:01.127 "nvme_iov_md": false 00:20:01.127 }, 00:20:01.127 "driver_specific": { 00:20:01.127 "raid": { 00:20:01.127 "uuid": "2b77bbf6-6a6f-4aef-9ef8-2f3cb7955fc8", 00:20:01.127 "strip_size_kb": 64, 00:20:01.127 "state": "online", 00:20:01.127 "raid_level": "raid5f", 00:20:01.127 "superblock": false, 00:20:01.127 "num_base_bdevs": 3, 00:20:01.127 "num_base_bdevs_discovered": 3, 00:20:01.127 "num_base_bdevs_operational": 3, 00:20:01.127 "base_bdevs_list": [ 00:20:01.127 { 00:20:01.127 "name": "BaseBdev1", 00:20:01.127 "uuid": "378103f3-d2ea-4592-a471-626b7482cd0e", 00:20:01.127 "is_configured": true, 00:20:01.127 "data_offset": 0, 00:20:01.127 "data_size": 65536 00:20:01.127 }, 00:20:01.127 { 00:20:01.127 "name": "BaseBdev2", 00:20:01.127 "uuid": "4df79e01-8335-42e8-a0cf-f4faf688dad8", 00:20:01.127 "is_configured": true, 00:20:01.127 "data_offset": 0, 00:20:01.127 "data_size": 65536 00:20:01.127 }, 00:20:01.127 { 00:20:01.127 "name": "BaseBdev3", 00:20:01.127 "uuid": "faba3470-4ae1-4269-8562-e088c5d8fb89", 00:20:01.127 "is_configured": true, 00:20:01.127 "data_offset": 0, 00:20:01.127 "data_size": 65536 00:20:01.127 } 00:20:01.127 ] 00:20:01.127 } 00:20:01.127 } 00:20:01.127 }' 00:20:01.127 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:01.127 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:01.127 BaseBdev2 00:20:01.127 BaseBdev3' 00:20:01.127 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:01.386 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:20:01.386 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:01.386 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:01.386 12:18:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.386 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:01.386 12:18:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.386 12:18:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.386 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:01.386 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:01.386 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:01.386 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:01.386 12:18:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.386 12:18:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.386 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:01.386 12:18:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.386 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:01.386 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:01.386 12:18:57 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:01.386 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:01.386 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:01.386 12:18:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.386 12:18:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.386 12:18:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.386 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:01.386 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:01.386 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:01.386 12:18:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.386 12:18:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.386 [2024-11-25 12:18:57.388283] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:01.644 12:18:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.644 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:01.644 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:20:01.645 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:01.645 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:20:01.645 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:01.645 
12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:20:01.645 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:01.645 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:01.645 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:01.645 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:01.645 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:01.645 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:01.645 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:01.645 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:01.645 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:01.645 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.645 12:18:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.645 12:18:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.645 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:01.645 12:18:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.645 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:01.645 "name": "Existed_Raid", 00:20:01.645 "uuid": "2b77bbf6-6a6f-4aef-9ef8-2f3cb7955fc8", 00:20:01.645 "strip_size_kb": 64, 00:20:01.645 "state": 
"online", 00:20:01.645 "raid_level": "raid5f", 00:20:01.645 "superblock": false, 00:20:01.645 "num_base_bdevs": 3, 00:20:01.645 "num_base_bdevs_discovered": 2, 00:20:01.645 "num_base_bdevs_operational": 2, 00:20:01.645 "base_bdevs_list": [ 00:20:01.645 { 00:20:01.645 "name": null, 00:20:01.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.645 "is_configured": false, 00:20:01.645 "data_offset": 0, 00:20:01.645 "data_size": 65536 00:20:01.645 }, 00:20:01.645 { 00:20:01.645 "name": "BaseBdev2", 00:20:01.645 "uuid": "4df79e01-8335-42e8-a0cf-f4faf688dad8", 00:20:01.645 "is_configured": true, 00:20:01.645 "data_offset": 0, 00:20:01.645 "data_size": 65536 00:20:01.645 }, 00:20:01.645 { 00:20:01.645 "name": "BaseBdev3", 00:20:01.645 "uuid": "faba3470-4ae1-4269-8562-e088c5d8fb89", 00:20:01.645 "is_configured": true, 00:20:01.645 "data_offset": 0, 00:20:01.645 "data_size": 65536 00:20:01.645 } 00:20:01.645 ] 00:20:01.645 }' 00:20:01.645 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:01.645 12:18:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.903 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:01.903 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:01.903 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.903 12:18:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.903 12:18:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.903 12:18:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:02.162 12:18:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.163 12:18:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:02.163 12:18:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:02.163 12:18:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:02.163 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.163 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.163 [2024-11-25 12:18:58.030930] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:02.163 [2024-11-25 12:18:58.031193] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:02.163 [2024-11-25 12:18:58.116064] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:02.163 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.163 12:18:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:02.163 12:18:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:02.163 12:18:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.163 12:18:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:02.163 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.163 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.163 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.163 12:18:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:02.163 12:18:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:20:02.163 12:18:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:20:02.163 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.163 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.163 [2024-11-25 12:18:58.172122] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:02.163 [2024-11-25 12:18:58.172177] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.423 BaseBdev2 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:20:02.423 [ 00:20:02.423 { 00:20:02.423 "name": "BaseBdev2", 00:20:02.423 "aliases": [ 00:20:02.423 "d7012f97-63c8-43ec-ac2e-ef303828d2bf" 00:20:02.423 ], 00:20:02.423 "product_name": "Malloc disk", 00:20:02.423 "block_size": 512, 00:20:02.423 "num_blocks": 65536, 00:20:02.423 "uuid": "d7012f97-63c8-43ec-ac2e-ef303828d2bf", 00:20:02.423 "assigned_rate_limits": { 00:20:02.423 "rw_ios_per_sec": 0, 00:20:02.423 "rw_mbytes_per_sec": 0, 00:20:02.423 "r_mbytes_per_sec": 0, 00:20:02.423 "w_mbytes_per_sec": 0 00:20:02.423 }, 00:20:02.423 "claimed": false, 00:20:02.423 "zoned": false, 00:20:02.423 "supported_io_types": { 00:20:02.423 "read": true, 00:20:02.423 "write": true, 00:20:02.423 "unmap": true, 00:20:02.423 "flush": true, 00:20:02.423 "reset": true, 00:20:02.423 "nvme_admin": false, 00:20:02.423 "nvme_io": false, 00:20:02.423 "nvme_io_md": false, 00:20:02.423 "write_zeroes": true, 00:20:02.423 "zcopy": true, 00:20:02.423 "get_zone_info": false, 00:20:02.423 "zone_management": false, 00:20:02.423 "zone_append": false, 00:20:02.423 "compare": false, 00:20:02.423 "compare_and_write": false, 00:20:02.423 "abort": true, 00:20:02.423 "seek_hole": false, 00:20:02.423 "seek_data": false, 00:20:02.423 "copy": true, 00:20:02.423 "nvme_iov_md": false 00:20:02.423 }, 00:20:02.423 "memory_domains": [ 00:20:02.423 { 00:20:02.423 "dma_device_id": "system", 00:20:02.423 "dma_device_type": 1 00:20:02.423 }, 00:20:02.423 { 00:20:02.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:02.423 "dma_device_type": 2 00:20:02.423 } 00:20:02.423 ], 00:20:02.423 "driver_specific": {} 00:20:02.423 } 00:20:02.423 ] 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.423 BaseBdev3 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.423 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.424 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.424 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:02.424 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.424 12:18:58 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:02.424 [ 00:20:02.424 { 00:20:02.424 "name": "BaseBdev3", 00:20:02.424 "aliases": [ 00:20:02.424 "8b9a9569-265e-4686-94a8-5365e75e5f1a" 00:20:02.424 ], 00:20:02.424 "product_name": "Malloc disk", 00:20:02.424 "block_size": 512, 00:20:02.424 "num_blocks": 65536, 00:20:02.424 "uuid": "8b9a9569-265e-4686-94a8-5365e75e5f1a", 00:20:02.424 "assigned_rate_limits": { 00:20:02.424 "rw_ios_per_sec": 0, 00:20:02.424 "rw_mbytes_per_sec": 0, 00:20:02.424 "r_mbytes_per_sec": 0, 00:20:02.424 "w_mbytes_per_sec": 0 00:20:02.424 }, 00:20:02.424 "claimed": false, 00:20:02.424 "zoned": false, 00:20:02.424 "supported_io_types": { 00:20:02.424 "read": true, 00:20:02.424 "write": true, 00:20:02.424 "unmap": true, 00:20:02.424 "flush": true, 00:20:02.424 "reset": true, 00:20:02.424 "nvme_admin": false, 00:20:02.424 "nvme_io": false, 00:20:02.424 "nvme_io_md": false, 00:20:02.424 "write_zeroes": true, 00:20:02.424 "zcopy": true, 00:20:02.424 "get_zone_info": false, 00:20:02.424 "zone_management": false, 00:20:02.424 "zone_append": false, 00:20:02.424 "compare": false, 00:20:02.424 "compare_and_write": false, 00:20:02.424 "abort": true, 00:20:02.424 "seek_hole": false, 00:20:02.424 "seek_data": false, 00:20:02.424 "copy": true, 00:20:02.424 "nvme_iov_md": false 00:20:02.424 }, 00:20:02.424 "memory_domains": [ 00:20:02.424 { 00:20:02.424 "dma_device_id": "system", 00:20:02.424 "dma_device_type": 1 00:20:02.424 }, 00:20:02.424 { 00:20:02.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:02.424 "dma_device_type": 2 00:20:02.424 } 00:20:02.424 ], 00:20:02.424 "driver_specific": {} 00:20:02.424 } 00:20:02.424 ] 00:20:02.424 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.424 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:02.424 12:18:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:02.424 12:18:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:02.424 12:18:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:02.424 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.424 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.424 [2024-11-25 12:18:58.477009] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:02.424 [2024-11-25 12:18:58.477062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:02.424 [2024-11-25 12:18:58.477095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:02.424 [2024-11-25 12:18:58.479534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:02.424 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.424 12:18:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:02.424 12:18:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:02.424 12:18:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:02.424 12:18:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:02.424 12:18:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:02.424 12:18:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:02.424 12:18:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:02.424 12:18:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:02.424 12:18:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:02.424 12:18:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:02.424 12:18:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.424 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.424 12:18:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:02.424 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.424 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.684 12:18:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:02.684 "name": "Existed_Raid", 00:20:02.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.684 "strip_size_kb": 64, 00:20:02.684 "state": "configuring", 00:20:02.684 "raid_level": "raid5f", 00:20:02.684 "superblock": false, 00:20:02.684 "num_base_bdevs": 3, 00:20:02.684 "num_base_bdevs_discovered": 2, 00:20:02.684 "num_base_bdevs_operational": 3, 00:20:02.684 "base_bdevs_list": [ 00:20:02.684 { 00:20:02.684 "name": "BaseBdev1", 00:20:02.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.684 "is_configured": false, 00:20:02.684 "data_offset": 0, 00:20:02.684 "data_size": 0 00:20:02.684 }, 00:20:02.684 { 00:20:02.684 "name": "BaseBdev2", 00:20:02.684 "uuid": "d7012f97-63c8-43ec-ac2e-ef303828d2bf", 00:20:02.684 "is_configured": true, 00:20:02.684 "data_offset": 0, 00:20:02.684 "data_size": 65536 00:20:02.684 }, 00:20:02.684 { 00:20:02.684 "name": "BaseBdev3", 00:20:02.684 "uuid": "8b9a9569-265e-4686-94a8-5365e75e5f1a", 00:20:02.684 "is_configured": true, 
00:20:02.684 "data_offset": 0, 00:20:02.684 "data_size": 65536 00:20:02.684 } 00:20:02.684 ] 00:20:02.684 }' 00:20:02.684 12:18:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:02.684 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.943 12:18:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:02.943 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.943 12:18:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.943 [2024-11-25 12:18:59.001167] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:02.943 12:18:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.943 12:18:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:02.943 12:18:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:02.943 12:18:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:02.943 12:18:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:02.943 12:18:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:02.943 12:18:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:02.943 12:18:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:02.943 12:18:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:02.943 12:18:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:02.943 12:18:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:02.943 12:18:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.943 12:18:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.943 12:18:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.943 12:18:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:02.943 12:18:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.201 12:18:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:03.201 "name": "Existed_Raid", 00:20:03.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:03.201 "strip_size_kb": 64, 00:20:03.201 "state": "configuring", 00:20:03.201 "raid_level": "raid5f", 00:20:03.201 "superblock": false, 00:20:03.201 "num_base_bdevs": 3, 00:20:03.201 "num_base_bdevs_discovered": 1, 00:20:03.201 "num_base_bdevs_operational": 3, 00:20:03.201 "base_bdevs_list": [ 00:20:03.201 { 00:20:03.201 "name": "BaseBdev1", 00:20:03.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:03.201 "is_configured": false, 00:20:03.201 "data_offset": 0, 00:20:03.201 "data_size": 0 00:20:03.201 }, 00:20:03.201 { 00:20:03.201 "name": null, 00:20:03.201 "uuid": "d7012f97-63c8-43ec-ac2e-ef303828d2bf", 00:20:03.201 "is_configured": false, 00:20:03.201 "data_offset": 0, 00:20:03.201 "data_size": 65536 00:20:03.201 }, 00:20:03.201 { 00:20:03.201 "name": "BaseBdev3", 00:20:03.201 "uuid": "8b9a9569-265e-4686-94a8-5365e75e5f1a", 00:20:03.201 "is_configured": true, 00:20:03.201 "data_offset": 0, 00:20:03.201 "data_size": 65536 00:20:03.201 } 00:20:03.201 ] 00:20:03.201 }' 00:20:03.201 12:18:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:03.201 12:18:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.458 12:18:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.458 12:18:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.458 12:18:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.458 12:18:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:03.458 12:18:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.716 12:18:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:20:03.716 12:18:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:03.716 12:18:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.716 12:18:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.716 [2024-11-25 12:18:59.607125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:03.716 BaseBdev1 00:20:03.716 12:18:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.716 12:18:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:20:03.716 12:18:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:03.716 12:18:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:03.716 12:18:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:03.716 12:18:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:03.716 12:18:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:03.717 12:18:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:03.717 12:18:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.717 12:18:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.717 12:18:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.717 12:18:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:03.717 12:18:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.717 12:18:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.717 [ 00:20:03.717 { 00:20:03.717 "name": "BaseBdev1", 00:20:03.717 "aliases": [ 00:20:03.717 "c95c1bb3-8d68-4a15-a45d-7de30ceb605d" 00:20:03.717 ], 00:20:03.717 "product_name": "Malloc disk", 00:20:03.717 "block_size": 512, 00:20:03.717 "num_blocks": 65536, 00:20:03.717 "uuid": "c95c1bb3-8d68-4a15-a45d-7de30ceb605d", 00:20:03.717 "assigned_rate_limits": { 00:20:03.717 "rw_ios_per_sec": 0, 00:20:03.717 "rw_mbytes_per_sec": 0, 00:20:03.717 "r_mbytes_per_sec": 0, 00:20:03.717 "w_mbytes_per_sec": 0 00:20:03.717 }, 00:20:03.717 "claimed": true, 00:20:03.717 "claim_type": "exclusive_write", 00:20:03.717 "zoned": false, 00:20:03.717 "supported_io_types": { 00:20:03.717 "read": true, 00:20:03.717 "write": true, 00:20:03.717 "unmap": true, 00:20:03.717 "flush": true, 00:20:03.717 "reset": true, 00:20:03.717 "nvme_admin": false, 00:20:03.717 "nvme_io": false, 00:20:03.717 "nvme_io_md": false, 00:20:03.717 "write_zeroes": true, 00:20:03.717 "zcopy": true, 00:20:03.717 "get_zone_info": false, 00:20:03.717 "zone_management": false, 00:20:03.717 "zone_append": false, 00:20:03.717 
"compare": false, 00:20:03.717 "compare_and_write": false, 00:20:03.717 "abort": true, 00:20:03.717 "seek_hole": false, 00:20:03.717 "seek_data": false, 00:20:03.717 "copy": true, 00:20:03.717 "nvme_iov_md": false 00:20:03.717 }, 00:20:03.717 "memory_domains": [ 00:20:03.717 { 00:20:03.717 "dma_device_id": "system", 00:20:03.717 "dma_device_type": 1 00:20:03.717 }, 00:20:03.717 { 00:20:03.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.717 "dma_device_type": 2 00:20:03.717 } 00:20:03.717 ], 00:20:03.717 "driver_specific": {} 00:20:03.717 } 00:20:03.717 ] 00:20:03.717 12:18:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.717 12:18:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:03.717 12:18:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:03.717 12:18:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:03.717 12:18:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:03.717 12:18:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:03.717 12:18:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:03.717 12:18:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:03.717 12:18:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:03.717 12:18:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:03.717 12:18:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:03.717 12:18:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:03.717 12:18:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.717 12:18:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.717 12:18:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.717 12:18:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:03.717 12:18:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.717 12:18:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:03.717 "name": "Existed_Raid", 00:20:03.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:03.717 "strip_size_kb": 64, 00:20:03.717 "state": "configuring", 00:20:03.717 "raid_level": "raid5f", 00:20:03.717 "superblock": false, 00:20:03.717 "num_base_bdevs": 3, 00:20:03.717 "num_base_bdevs_discovered": 2, 00:20:03.717 "num_base_bdevs_operational": 3, 00:20:03.717 "base_bdevs_list": [ 00:20:03.717 { 00:20:03.717 "name": "BaseBdev1", 00:20:03.717 "uuid": "c95c1bb3-8d68-4a15-a45d-7de30ceb605d", 00:20:03.717 "is_configured": true, 00:20:03.717 "data_offset": 0, 00:20:03.717 "data_size": 65536 00:20:03.717 }, 00:20:03.717 { 00:20:03.717 "name": null, 00:20:03.717 "uuid": "d7012f97-63c8-43ec-ac2e-ef303828d2bf", 00:20:03.717 "is_configured": false, 00:20:03.717 "data_offset": 0, 00:20:03.717 "data_size": 65536 00:20:03.717 }, 00:20:03.717 { 00:20:03.717 "name": "BaseBdev3", 00:20:03.717 "uuid": "8b9a9569-265e-4686-94a8-5365e75e5f1a", 00:20:03.717 "is_configured": true, 00:20:03.717 "data_offset": 0, 00:20:03.717 "data_size": 65536 00:20:03.717 } 00:20:03.717 ] 00:20:03.717 }' 00:20:03.717 12:18:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:03.717 12:18:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.284 12:19:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.284 12:19:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.284 12:19:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.284 12:19:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:04.284 12:19:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.284 12:19:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:20:04.284 12:19:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:20:04.284 12:19:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.284 12:19:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.284 [2024-11-25 12:19:00.215368] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:04.284 12:19:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.284 12:19:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:04.284 12:19:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:04.284 12:19:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:04.284 12:19:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:04.284 12:19:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:04.284 12:19:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:04.284 12:19:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:04.284 12:19:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:04.284 12:19:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:04.284 12:19:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:04.284 12:19:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:04.284 12:19:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.284 12:19:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.284 12:19:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.284 12:19:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.284 12:19:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:04.284 "name": "Existed_Raid", 00:20:04.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.284 "strip_size_kb": 64, 00:20:04.284 "state": "configuring", 00:20:04.284 "raid_level": "raid5f", 00:20:04.284 "superblock": false, 00:20:04.284 "num_base_bdevs": 3, 00:20:04.284 "num_base_bdevs_discovered": 1, 00:20:04.284 "num_base_bdevs_operational": 3, 00:20:04.284 "base_bdevs_list": [ 00:20:04.284 { 00:20:04.284 "name": "BaseBdev1", 00:20:04.284 "uuid": "c95c1bb3-8d68-4a15-a45d-7de30ceb605d", 00:20:04.284 "is_configured": true, 00:20:04.284 "data_offset": 0, 00:20:04.284 "data_size": 65536 00:20:04.284 }, 00:20:04.284 { 00:20:04.284 "name": null, 00:20:04.284 "uuid": "d7012f97-63c8-43ec-ac2e-ef303828d2bf", 00:20:04.284 "is_configured": false, 00:20:04.284 "data_offset": 0, 00:20:04.284 "data_size": 65536 00:20:04.284 }, 00:20:04.284 { 00:20:04.285 "name": null, 
00:20:04.285 "uuid": "8b9a9569-265e-4686-94a8-5365e75e5f1a", 00:20:04.285 "is_configured": false, 00:20:04.285 "data_offset": 0, 00:20:04.285 "data_size": 65536 00:20:04.285 } 00:20:04.285 ] 00:20:04.285 }' 00:20:04.285 12:19:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:04.285 12:19:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.851 12:19:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.851 12:19:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:04.851 12:19:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.851 12:19:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.851 12:19:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.851 12:19:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:20:04.851 12:19:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:04.851 12:19:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.851 12:19:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.851 [2024-11-25 12:19:00.775564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:04.851 12:19:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.851 12:19:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:04.851 12:19:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:04.851 12:19:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:04.851 12:19:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:04.851 12:19:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:04.851 12:19:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:04.851 12:19:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:04.851 12:19:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:04.851 12:19:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:04.851 12:19:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:04.852 12:19:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:04.852 12:19:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.852 12:19:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.852 12:19:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.852 12:19:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.852 12:19:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:04.852 "name": "Existed_Raid", 00:20:04.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.852 "strip_size_kb": 64, 00:20:04.852 "state": "configuring", 00:20:04.852 "raid_level": "raid5f", 00:20:04.852 "superblock": false, 00:20:04.852 "num_base_bdevs": 3, 00:20:04.852 "num_base_bdevs_discovered": 2, 00:20:04.852 "num_base_bdevs_operational": 3, 00:20:04.852 "base_bdevs_list": [ 00:20:04.852 { 
00:20:04.852 "name": "BaseBdev1", 00:20:04.852 "uuid": "c95c1bb3-8d68-4a15-a45d-7de30ceb605d", 00:20:04.852 "is_configured": true, 00:20:04.852 "data_offset": 0, 00:20:04.852 "data_size": 65536 00:20:04.852 }, 00:20:04.852 { 00:20:04.852 "name": null, 00:20:04.852 "uuid": "d7012f97-63c8-43ec-ac2e-ef303828d2bf", 00:20:04.852 "is_configured": false, 00:20:04.852 "data_offset": 0, 00:20:04.852 "data_size": 65536 00:20:04.852 }, 00:20:04.852 { 00:20:04.852 "name": "BaseBdev3", 00:20:04.852 "uuid": "8b9a9569-265e-4686-94a8-5365e75e5f1a", 00:20:04.852 "is_configured": true, 00:20:04.852 "data_offset": 0, 00:20:04.852 "data_size": 65536 00:20:04.852 } 00:20:04.852 ] 00:20:04.852 }' 00:20:04.852 12:19:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:04.852 12:19:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.418 12:19:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.418 12:19:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.418 12:19:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.418 12:19:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:05.418 12:19:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.418 12:19:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:20:05.418 12:19:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:05.418 12:19:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.418 12:19:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.418 [2024-11-25 12:19:01.352683] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:05.418 12:19:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.418 12:19:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:05.418 12:19:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:05.418 12:19:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:05.418 12:19:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:05.418 12:19:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:05.418 12:19:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:05.418 12:19:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:05.418 12:19:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:05.418 12:19:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:05.418 12:19:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:05.418 12:19:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.418 12:19:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:05.418 12:19:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.418 12:19:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.418 12:19:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.418 12:19:01 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:05.418 "name": "Existed_Raid", 00:20:05.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.418 "strip_size_kb": 64, 00:20:05.418 "state": "configuring", 00:20:05.418 "raid_level": "raid5f", 00:20:05.418 "superblock": false, 00:20:05.418 "num_base_bdevs": 3, 00:20:05.418 "num_base_bdevs_discovered": 1, 00:20:05.418 "num_base_bdevs_operational": 3, 00:20:05.418 "base_bdevs_list": [ 00:20:05.418 { 00:20:05.419 "name": null, 00:20:05.419 "uuid": "c95c1bb3-8d68-4a15-a45d-7de30ceb605d", 00:20:05.419 "is_configured": false, 00:20:05.419 "data_offset": 0, 00:20:05.419 "data_size": 65536 00:20:05.419 }, 00:20:05.419 { 00:20:05.419 "name": null, 00:20:05.419 "uuid": "d7012f97-63c8-43ec-ac2e-ef303828d2bf", 00:20:05.419 "is_configured": false, 00:20:05.419 "data_offset": 0, 00:20:05.419 "data_size": 65536 00:20:05.419 }, 00:20:05.419 { 00:20:05.419 "name": "BaseBdev3", 00:20:05.419 "uuid": "8b9a9569-265e-4686-94a8-5365e75e5f1a", 00:20:05.419 "is_configured": true, 00:20:05.419 "data_offset": 0, 00:20:05.419 "data_size": 65536 00:20:05.419 } 00:20:05.419 ] 00:20:05.419 }' 00:20:05.419 12:19:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:05.419 12:19:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.986 12:19:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.986 12:19:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.986 12:19:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:05.986 12:19:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.986 12:19:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.986 12:19:01 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:20:05.986 12:19:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:05.986 12:19:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.986 12:19:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.986 [2024-11-25 12:19:01.983927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:05.986 12:19:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.986 12:19:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:05.986 12:19:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:05.986 12:19:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:05.986 12:19:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:05.986 12:19:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:05.986 12:19:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:05.986 12:19:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:05.986 12:19:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:05.986 12:19:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:05.986 12:19:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:05.986 12:19:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.987 12:19:01 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.987 12:19:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:05.987 12:19:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.987 12:19:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.987 12:19:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:05.987 "name": "Existed_Raid", 00:20:05.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.987 "strip_size_kb": 64, 00:20:05.987 "state": "configuring", 00:20:05.987 "raid_level": "raid5f", 00:20:05.987 "superblock": false, 00:20:05.987 "num_base_bdevs": 3, 00:20:05.987 "num_base_bdevs_discovered": 2, 00:20:05.987 "num_base_bdevs_operational": 3, 00:20:05.987 "base_bdevs_list": [ 00:20:05.987 { 00:20:05.987 "name": null, 00:20:05.987 "uuid": "c95c1bb3-8d68-4a15-a45d-7de30ceb605d", 00:20:05.987 "is_configured": false, 00:20:05.987 "data_offset": 0, 00:20:05.987 "data_size": 65536 00:20:05.987 }, 00:20:05.987 { 00:20:05.987 "name": "BaseBdev2", 00:20:05.987 "uuid": "d7012f97-63c8-43ec-ac2e-ef303828d2bf", 00:20:05.987 "is_configured": true, 00:20:05.987 "data_offset": 0, 00:20:05.987 "data_size": 65536 00:20:05.987 }, 00:20:05.987 { 00:20:05.987 "name": "BaseBdev3", 00:20:05.987 "uuid": "8b9a9569-265e-4686-94a8-5365e75e5f1a", 00:20:05.987 "is_configured": true, 00:20:05.987 "data_offset": 0, 00:20:05.987 "data_size": 65536 00:20:05.987 } 00:20:05.987 ] 00:20:05.987 }' 00:20:05.987 12:19:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:05.987 12:19:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.554 12:19:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.554 12:19:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:06.554 12:19:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.554 12:19:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.554 12:19:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.554 12:19:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:20:06.554 12:19:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.554 12:19:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:06.554 12:19:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.554 12:19:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.554 12:19:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.554 12:19:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c95c1bb3-8d68-4a15-a45d-7de30ceb605d 00:20:06.554 12:19:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.554 12:19:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.812 [2024-11-25 12:19:02.662565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:06.812 [2024-11-25 12:19:02.662625] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:06.812 [2024-11-25 12:19:02.662641] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:20:06.812 [2024-11-25 12:19:02.662966] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:20:06.812 [2024-11-25 12:19:02.667909] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:06.812 [2024-11-25 12:19:02.667938] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:20:06.812 [2024-11-25 12:19:02.668249] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:06.812 NewBaseBdev 00:20:06.812 12:19:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.812 12:19:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:20:06.812 12:19:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:20:06.812 12:19:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:06.812 12:19:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:06.812 12:19:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:06.812 12:19:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:06.812 12:19:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:06.812 12:19:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.812 12:19:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.812 12:19:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.812 12:19:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:06.812 12:19:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.812 12:19:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.812 [ 00:20:06.812 { 00:20:06.812 "name": "NewBaseBdev", 00:20:06.812 "aliases": [ 00:20:06.812 "c95c1bb3-8d68-4a15-a45d-7de30ceb605d" 00:20:06.812 ], 00:20:06.812 "product_name": "Malloc disk", 00:20:06.812 "block_size": 512, 00:20:06.812 "num_blocks": 65536, 00:20:06.812 "uuid": "c95c1bb3-8d68-4a15-a45d-7de30ceb605d", 00:20:06.812 "assigned_rate_limits": { 00:20:06.812 "rw_ios_per_sec": 0, 00:20:06.812 "rw_mbytes_per_sec": 0, 00:20:06.812 "r_mbytes_per_sec": 0, 00:20:06.812 "w_mbytes_per_sec": 0 00:20:06.812 }, 00:20:06.812 "claimed": true, 00:20:06.812 "claim_type": "exclusive_write", 00:20:06.812 "zoned": false, 00:20:06.812 "supported_io_types": { 00:20:06.812 "read": true, 00:20:06.812 "write": true, 00:20:06.812 "unmap": true, 00:20:06.812 "flush": true, 00:20:06.812 "reset": true, 00:20:06.812 "nvme_admin": false, 00:20:06.812 "nvme_io": false, 00:20:06.812 "nvme_io_md": false, 00:20:06.812 "write_zeroes": true, 00:20:06.812 "zcopy": true, 00:20:06.812 "get_zone_info": false, 00:20:06.812 "zone_management": false, 00:20:06.812 "zone_append": false, 00:20:06.812 "compare": false, 00:20:06.812 "compare_and_write": false, 00:20:06.812 "abort": true, 00:20:06.812 "seek_hole": false, 00:20:06.812 "seek_data": false, 00:20:06.812 "copy": true, 00:20:06.812 "nvme_iov_md": false 00:20:06.812 }, 00:20:06.812 "memory_domains": [ 00:20:06.812 { 00:20:06.812 "dma_device_id": "system", 00:20:06.812 "dma_device_type": 1 00:20:06.812 }, 00:20:06.812 { 00:20:06.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:06.812 "dma_device_type": 2 00:20:06.812 } 00:20:06.812 ], 00:20:06.812 "driver_specific": {} 00:20:06.812 } 00:20:06.812 ] 00:20:06.812 12:19:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.812 12:19:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:06.812 12:19:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:20:06.813 12:19:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:06.813 12:19:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:06.813 12:19:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:06.813 12:19:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:06.813 12:19:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:06.813 12:19:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:06.813 12:19:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:06.813 12:19:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:06.813 12:19:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:06.813 12:19:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.813 12:19:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:06.813 12:19:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.813 12:19:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.813 12:19:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.813 12:19:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:06.813 "name": "Existed_Raid", 00:20:06.813 "uuid": "6049ffbe-b288-4b4a-9f9b-567d96302953", 00:20:06.813 "strip_size_kb": 64, 00:20:06.813 "state": "online", 
00:20:06.813 "raid_level": "raid5f", 00:20:06.813 "superblock": false, 00:20:06.813 "num_base_bdevs": 3, 00:20:06.813 "num_base_bdevs_discovered": 3, 00:20:06.813 "num_base_bdevs_operational": 3, 00:20:06.813 "base_bdevs_list": [ 00:20:06.813 { 00:20:06.813 "name": "NewBaseBdev", 00:20:06.813 "uuid": "c95c1bb3-8d68-4a15-a45d-7de30ceb605d", 00:20:06.813 "is_configured": true, 00:20:06.813 "data_offset": 0, 00:20:06.813 "data_size": 65536 00:20:06.813 }, 00:20:06.813 { 00:20:06.813 "name": "BaseBdev2", 00:20:06.813 "uuid": "d7012f97-63c8-43ec-ac2e-ef303828d2bf", 00:20:06.813 "is_configured": true, 00:20:06.813 "data_offset": 0, 00:20:06.813 "data_size": 65536 00:20:06.813 }, 00:20:06.813 { 00:20:06.813 "name": "BaseBdev3", 00:20:06.813 "uuid": "8b9a9569-265e-4686-94a8-5365e75e5f1a", 00:20:06.813 "is_configured": true, 00:20:06.813 "data_offset": 0, 00:20:06.813 "data_size": 65536 00:20:06.813 } 00:20:06.813 ] 00:20:06.813 }' 00:20:06.813 12:19:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:06.813 12:19:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.378 12:19:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:20:07.379 12:19:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:07.379 12:19:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:07.379 12:19:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:07.379 12:19:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:07.379 12:19:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:07.379 12:19:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:07.379 12:19:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:07.379 12:19:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.379 12:19:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.379 [2024-11-25 12:19:03.178209] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:07.379 12:19:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.379 12:19:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:07.379 "name": "Existed_Raid", 00:20:07.379 "aliases": [ 00:20:07.379 "6049ffbe-b288-4b4a-9f9b-567d96302953" 00:20:07.379 ], 00:20:07.379 "product_name": "Raid Volume", 00:20:07.379 "block_size": 512, 00:20:07.379 "num_blocks": 131072, 00:20:07.379 "uuid": "6049ffbe-b288-4b4a-9f9b-567d96302953", 00:20:07.379 "assigned_rate_limits": { 00:20:07.379 "rw_ios_per_sec": 0, 00:20:07.379 "rw_mbytes_per_sec": 0, 00:20:07.379 "r_mbytes_per_sec": 0, 00:20:07.379 "w_mbytes_per_sec": 0 00:20:07.379 }, 00:20:07.379 "claimed": false, 00:20:07.379 "zoned": false, 00:20:07.379 "supported_io_types": { 00:20:07.379 "read": true, 00:20:07.379 "write": true, 00:20:07.379 "unmap": false, 00:20:07.379 "flush": false, 00:20:07.379 "reset": true, 00:20:07.379 "nvme_admin": false, 00:20:07.379 "nvme_io": false, 00:20:07.379 "nvme_io_md": false, 00:20:07.379 "write_zeroes": true, 00:20:07.379 "zcopy": false, 00:20:07.379 "get_zone_info": false, 00:20:07.379 "zone_management": false, 00:20:07.379 "zone_append": false, 00:20:07.379 "compare": false, 00:20:07.379 "compare_and_write": false, 00:20:07.379 "abort": false, 00:20:07.379 "seek_hole": false, 00:20:07.379 "seek_data": false, 00:20:07.379 "copy": false, 00:20:07.379 "nvme_iov_md": false 00:20:07.379 }, 00:20:07.379 "driver_specific": { 00:20:07.379 "raid": { 00:20:07.379 "uuid": 
"6049ffbe-b288-4b4a-9f9b-567d96302953", 00:20:07.379 "strip_size_kb": 64, 00:20:07.379 "state": "online", 00:20:07.379 "raid_level": "raid5f", 00:20:07.379 "superblock": false, 00:20:07.379 "num_base_bdevs": 3, 00:20:07.379 "num_base_bdevs_discovered": 3, 00:20:07.379 "num_base_bdevs_operational": 3, 00:20:07.379 "base_bdevs_list": [ 00:20:07.379 { 00:20:07.379 "name": "NewBaseBdev", 00:20:07.379 "uuid": "c95c1bb3-8d68-4a15-a45d-7de30ceb605d", 00:20:07.379 "is_configured": true, 00:20:07.379 "data_offset": 0, 00:20:07.379 "data_size": 65536 00:20:07.379 }, 00:20:07.379 { 00:20:07.379 "name": "BaseBdev2", 00:20:07.379 "uuid": "d7012f97-63c8-43ec-ac2e-ef303828d2bf", 00:20:07.379 "is_configured": true, 00:20:07.379 "data_offset": 0, 00:20:07.379 "data_size": 65536 00:20:07.379 }, 00:20:07.379 { 00:20:07.379 "name": "BaseBdev3", 00:20:07.379 "uuid": "8b9a9569-265e-4686-94a8-5365e75e5f1a", 00:20:07.379 "is_configured": true, 00:20:07.379 "data_offset": 0, 00:20:07.379 "data_size": 65536 00:20:07.379 } 00:20:07.379 ] 00:20:07.379 } 00:20:07.379 } 00:20:07.379 }' 00:20:07.379 12:19:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:07.379 12:19:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:20:07.379 BaseBdev2 00:20:07.379 BaseBdev3' 00:20:07.379 12:19:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:07.379 12:19:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:07.379 12:19:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:07.379 12:19:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:07.379 12:19:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:20:07.379 12:19:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.379 12:19:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.379 12:19:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.379 12:19:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:07.379 12:19:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:07.379 12:19:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:07.379 12:19:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:07.379 12:19:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.379 12:19:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.379 12:19:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:07.379 12:19:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.379 12:19:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:07.379 12:19:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:07.379 12:19:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:07.379 12:19:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:07.379 12:19:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.379 12:19:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:07.379 12:19:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.379 12:19:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.637 12:19:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:07.637 12:19:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:07.637 12:19:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:07.637 12:19:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.637 12:19:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.637 [2024-11-25 12:19:03.474007] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:07.637 [2024-11-25 12:19:03.474042] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:07.637 [2024-11-25 12:19:03.474146] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:07.637 [2024-11-25 12:19:03.474537] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:07.637 [2024-11-25 12:19:03.474562] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:20:07.637 12:19:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.637 12:19:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80225 00:20:07.637 12:19:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80225 ']' 00:20:07.637 12:19:03 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 80225 00:20:07.637 12:19:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:20:07.637 12:19:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:07.637 12:19:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80225 00:20:07.637 12:19:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:07.637 killing process with pid 80225 00:20:07.637 12:19:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:07.637 12:19:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80225' 00:20:07.637 12:19:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80225 00:20:07.637 12:19:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80225 00:20:07.637 [2024-11-25 12:19:03.507989] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:07.932 [2024-11-25 12:19:03.779661] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:08.872 12:19:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:20:08.872 00:20:08.872 real 0m11.761s 00:20:08.872 user 0m19.520s 00:20:08.872 sys 0m1.633s 00:20:08.873 12:19:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:08.873 12:19:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.873 ************************************ 00:20:08.873 END TEST raid5f_state_function_test 00:20:08.873 ************************************ 00:20:08.873 12:19:04 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:20:08.873 12:19:04 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:08.873 12:19:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:08.873 12:19:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:08.873 ************************************ 00:20:08.873 START TEST raid5f_state_function_test_sb 00:20:08.873 ************************************ 00:20:08.873 12:19:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:20:08.873 12:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:20:08.873 12:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:20:08.873 12:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:08.873 12:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:08.873 12:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:08.873 12:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:08.873 12:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:08.873 12:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:08.873 12:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:08.873 12:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:08.873 12:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:08.873 12:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:08.873 12:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:20:08.873 12:19:04 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:08.873 12:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:08.873 12:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:08.873 12:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:08.873 12:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:08.873 12:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:08.873 12:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:08.873 12:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:08.873 12:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:20:08.873 12:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:20:08.873 12:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:20:08.873 12:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:08.873 12:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:08.873 12:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80858 00:20:08.873 12:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:08.873 Process raid pid: 80858 00:20:08.873 12:19:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80858' 00:20:08.873 12:19:04 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80858 00:20:08.873 12:19:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80858 ']' 00:20:08.873 12:19:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.873 12:19:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:08.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.873 12:19:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.873 12:19:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:08.873 12:19:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:08.873 [2024-11-25 12:19:04.958102] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:20:08.873 [2024-11-25 12:19:04.958295] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:09.131 [2024-11-25 12:19:05.133720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.390 [2024-11-25 12:19:05.268210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.390 [2024-11-25 12:19:05.479628] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:09.390 [2024-11-25 12:19:05.479676] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:09.956 12:19:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:09.956 12:19:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:20:09.956 12:19:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:09.956 12:19:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.956 12:19:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:09.956 [2024-11-25 12:19:05.963507] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:09.956 [2024-11-25 12:19:05.963568] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:09.956 [2024-11-25 12:19:05.963584] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:09.956 [2024-11-25 12:19:05.963600] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:09.957 [2024-11-25 12:19:05.963610] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:20:09.957 [2024-11-25 12:19:05.963624] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:09.957 12:19:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.957 12:19:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:09.957 12:19:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:09.957 12:19:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:09.957 12:19:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:09.957 12:19:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:09.957 12:19:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:09.957 12:19:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:09.957 12:19:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:09.957 12:19:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:09.957 12:19:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:09.957 12:19:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.957 12:19:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:09.957 12:19:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.957 12:19:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:09.957 12:19:05 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.957 12:19:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:09.957 "name": "Existed_Raid", 00:20:09.957 "uuid": "9c09d6a6-3c9a-4f41-9bc4-4db28258dd71", 00:20:09.957 "strip_size_kb": 64, 00:20:09.957 "state": "configuring", 00:20:09.957 "raid_level": "raid5f", 00:20:09.957 "superblock": true, 00:20:09.957 "num_base_bdevs": 3, 00:20:09.957 "num_base_bdevs_discovered": 0, 00:20:09.957 "num_base_bdevs_operational": 3, 00:20:09.957 "base_bdevs_list": [ 00:20:09.957 { 00:20:09.957 "name": "BaseBdev1", 00:20:09.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.957 "is_configured": false, 00:20:09.957 "data_offset": 0, 00:20:09.957 "data_size": 0 00:20:09.957 }, 00:20:09.957 { 00:20:09.957 "name": "BaseBdev2", 00:20:09.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.957 "is_configured": false, 00:20:09.957 "data_offset": 0, 00:20:09.957 "data_size": 0 00:20:09.957 }, 00:20:09.957 { 00:20:09.957 "name": "BaseBdev3", 00:20:09.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.957 "is_configured": false, 00:20:09.957 "data_offset": 0, 00:20:09.957 "data_size": 0 00:20:09.957 } 00:20:09.957 ] 00:20:09.957 }' 00:20:09.957 12:19:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:09.957 12:19:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:10.523 12:19:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:10.523 12:19:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.523 12:19:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:10.523 [2024-11-25 12:19:06.479610] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:10.523 
[2024-11-25 12:19:06.479657] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:10.524 12:19:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.524 12:19:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:10.524 12:19:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.524 12:19:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:10.524 [2024-11-25 12:19:06.491606] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:10.524 [2024-11-25 12:19:06.491659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:10.524 [2024-11-25 12:19:06.491673] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:10.524 [2024-11-25 12:19:06.491699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:10.524 [2024-11-25 12:19:06.491709] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:10.524 [2024-11-25 12:19:06.491726] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:10.524 12:19:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.524 12:19:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:10.524 12:19:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.524 12:19:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:10.524 [2024-11-25 12:19:06.536729] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:10.524 BaseBdev1 00:20:10.524 12:19:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.524 12:19:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:10.524 12:19:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:10.524 12:19:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:10.524 12:19:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:10.524 12:19:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:10.524 12:19:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:10.524 12:19:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:10.524 12:19:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.524 12:19:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:10.524 12:19:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.524 12:19:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:10.524 12:19:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.524 12:19:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:10.524 [ 00:20:10.524 { 00:20:10.524 "name": "BaseBdev1", 00:20:10.524 "aliases": [ 00:20:10.524 "0c35f1eb-4a5b-4fda-a63a-e0d2d869f135" 00:20:10.524 ], 00:20:10.524 "product_name": "Malloc disk", 00:20:10.524 "block_size": 512, 00:20:10.524 
"num_blocks": 65536, 00:20:10.524 "uuid": "0c35f1eb-4a5b-4fda-a63a-e0d2d869f135", 00:20:10.524 "assigned_rate_limits": { 00:20:10.524 "rw_ios_per_sec": 0, 00:20:10.524 "rw_mbytes_per_sec": 0, 00:20:10.524 "r_mbytes_per_sec": 0, 00:20:10.524 "w_mbytes_per_sec": 0 00:20:10.524 }, 00:20:10.524 "claimed": true, 00:20:10.524 "claim_type": "exclusive_write", 00:20:10.524 "zoned": false, 00:20:10.524 "supported_io_types": { 00:20:10.524 "read": true, 00:20:10.524 "write": true, 00:20:10.524 "unmap": true, 00:20:10.524 "flush": true, 00:20:10.524 "reset": true, 00:20:10.524 "nvme_admin": false, 00:20:10.524 "nvme_io": false, 00:20:10.524 "nvme_io_md": false, 00:20:10.524 "write_zeroes": true, 00:20:10.524 "zcopy": true, 00:20:10.524 "get_zone_info": false, 00:20:10.524 "zone_management": false, 00:20:10.524 "zone_append": false, 00:20:10.524 "compare": false, 00:20:10.524 "compare_and_write": false, 00:20:10.524 "abort": true, 00:20:10.524 "seek_hole": false, 00:20:10.524 "seek_data": false, 00:20:10.524 "copy": true, 00:20:10.524 "nvme_iov_md": false 00:20:10.524 }, 00:20:10.524 "memory_domains": [ 00:20:10.524 { 00:20:10.524 "dma_device_id": "system", 00:20:10.524 "dma_device_type": 1 00:20:10.524 }, 00:20:10.524 { 00:20:10.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.524 "dma_device_type": 2 00:20:10.524 } 00:20:10.524 ], 00:20:10.524 "driver_specific": {} 00:20:10.524 } 00:20:10.524 ] 00:20:10.524 12:19:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.524 12:19:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:10.524 12:19:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:10.524 12:19:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:10.524 12:19:06 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:10.524 12:19:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:10.524 12:19:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:10.524 12:19:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:10.524 12:19:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:10.524 12:19:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:10.524 12:19:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:10.524 12:19:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:10.524 12:19:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.524 12:19:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.524 12:19:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:10.524 12:19:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:10.524 12:19:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.783 12:19:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:10.783 "name": "Existed_Raid", 00:20:10.783 "uuid": "c1c72cb6-d5b0-48eb-ba44-2d234ebb06fd", 00:20:10.783 "strip_size_kb": 64, 00:20:10.783 "state": "configuring", 00:20:10.783 "raid_level": "raid5f", 00:20:10.783 "superblock": true, 00:20:10.783 "num_base_bdevs": 3, 00:20:10.783 "num_base_bdevs_discovered": 1, 00:20:10.783 "num_base_bdevs_operational": 3, 00:20:10.783 "base_bdevs_list": [ 00:20:10.783 { 00:20:10.783 
"name": "BaseBdev1", 00:20:10.783 "uuid": "0c35f1eb-4a5b-4fda-a63a-e0d2d869f135", 00:20:10.783 "is_configured": true, 00:20:10.783 "data_offset": 2048, 00:20:10.783 "data_size": 63488 00:20:10.783 }, 00:20:10.783 { 00:20:10.783 "name": "BaseBdev2", 00:20:10.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.783 "is_configured": false, 00:20:10.783 "data_offset": 0, 00:20:10.783 "data_size": 0 00:20:10.783 }, 00:20:10.783 { 00:20:10.783 "name": "BaseBdev3", 00:20:10.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.783 "is_configured": false, 00:20:10.783 "data_offset": 0, 00:20:10.783 "data_size": 0 00:20:10.783 } 00:20:10.783 ] 00:20:10.783 }' 00:20:10.783 12:19:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:10.783 12:19:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.042 12:19:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:11.042 12:19:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.042 12:19:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.042 [2024-11-25 12:19:07.048904] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:11.042 [2024-11-25 12:19:07.048969] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:11.042 12:19:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.042 12:19:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:11.042 12:19:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.042 12:19:07 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:20:11.042 [2024-11-25 12:19:07.056969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:11.042 [2024-11-25 12:19:07.059362] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:11.042 [2024-11-25 12:19:07.059413] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:11.042 [2024-11-25 12:19:07.059428] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:11.042 [2024-11-25 12:19:07.059444] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:11.042 12:19:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.042 12:19:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:11.042 12:19:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:11.042 12:19:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:11.042 12:19:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:11.042 12:19:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:11.042 12:19:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:11.042 12:19:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:11.042 12:19:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:11.042 12:19:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:11.042 12:19:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:20:11.042 12:19:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:11.042 12:19:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:11.042 12:19:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.042 12:19:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:11.042 12:19:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.042 12:19:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.042 12:19:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.042 12:19:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:11.042 "name": "Existed_Raid", 00:20:11.042 "uuid": "922a7c8f-c88f-491b-8e5f-1720669bbbbc", 00:20:11.042 "strip_size_kb": 64, 00:20:11.042 "state": "configuring", 00:20:11.042 "raid_level": "raid5f", 00:20:11.042 "superblock": true, 00:20:11.042 "num_base_bdevs": 3, 00:20:11.042 "num_base_bdevs_discovered": 1, 00:20:11.042 "num_base_bdevs_operational": 3, 00:20:11.042 "base_bdevs_list": [ 00:20:11.042 { 00:20:11.042 "name": "BaseBdev1", 00:20:11.042 "uuid": "0c35f1eb-4a5b-4fda-a63a-e0d2d869f135", 00:20:11.042 "is_configured": true, 00:20:11.042 "data_offset": 2048, 00:20:11.042 "data_size": 63488 00:20:11.042 }, 00:20:11.042 { 00:20:11.042 "name": "BaseBdev2", 00:20:11.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.042 "is_configured": false, 00:20:11.042 "data_offset": 0, 00:20:11.042 "data_size": 0 00:20:11.042 }, 00:20:11.042 { 00:20:11.042 "name": "BaseBdev3", 00:20:11.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.042 "is_configured": false, 00:20:11.042 "data_offset": 0, 00:20:11.042 "data_size": 
0 00:20:11.042 } 00:20:11.042 ] 00:20:11.042 }' 00:20:11.042 12:19:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:11.042 12:19:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.610 12:19:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:11.610 12:19:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.610 12:19:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.610 [2024-11-25 12:19:07.591330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:11.610 BaseBdev2 00:20:11.610 12:19:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.610 12:19:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:11.610 12:19:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:11.610 12:19:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:11.610 12:19:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:11.610 12:19:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:11.610 12:19:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:11.610 12:19:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:11.610 12:19:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.610 12:19:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.610 12:19:07 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.610 12:19:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:11.610 12:19:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.610 12:19:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.610 [ 00:20:11.610 { 00:20:11.610 "name": "BaseBdev2", 00:20:11.610 "aliases": [ 00:20:11.610 "cd503d92-8b44-4694-9f7a-563ca56292f2" 00:20:11.610 ], 00:20:11.610 "product_name": "Malloc disk", 00:20:11.610 "block_size": 512, 00:20:11.610 "num_blocks": 65536, 00:20:11.610 "uuid": "cd503d92-8b44-4694-9f7a-563ca56292f2", 00:20:11.610 "assigned_rate_limits": { 00:20:11.610 "rw_ios_per_sec": 0, 00:20:11.610 "rw_mbytes_per_sec": 0, 00:20:11.610 "r_mbytes_per_sec": 0, 00:20:11.610 "w_mbytes_per_sec": 0 00:20:11.610 }, 00:20:11.610 "claimed": true, 00:20:11.610 "claim_type": "exclusive_write", 00:20:11.610 "zoned": false, 00:20:11.610 "supported_io_types": { 00:20:11.610 "read": true, 00:20:11.610 "write": true, 00:20:11.610 "unmap": true, 00:20:11.610 "flush": true, 00:20:11.610 "reset": true, 00:20:11.610 "nvme_admin": false, 00:20:11.610 "nvme_io": false, 00:20:11.610 "nvme_io_md": false, 00:20:11.610 "write_zeroes": true, 00:20:11.610 "zcopy": true, 00:20:11.610 "get_zone_info": false, 00:20:11.610 "zone_management": false, 00:20:11.610 "zone_append": false, 00:20:11.610 "compare": false, 00:20:11.610 "compare_and_write": false, 00:20:11.610 "abort": true, 00:20:11.610 "seek_hole": false, 00:20:11.610 "seek_data": false, 00:20:11.610 "copy": true, 00:20:11.610 "nvme_iov_md": false 00:20:11.610 }, 00:20:11.610 "memory_domains": [ 00:20:11.610 { 00:20:11.610 "dma_device_id": "system", 00:20:11.610 "dma_device_type": 1 00:20:11.610 }, 00:20:11.610 { 00:20:11.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.610 "dma_device_type": 2 00:20:11.610 } 
00:20:11.610 ], 00:20:11.610 "driver_specific": {} 00:20:11.610 } 00:20:11.610 ] 00:20:11.610 12:19:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.610 12:19:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:11.610 12:19:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:11.610 12:19:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:11.610 12:19:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:11.610 12:19:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:11.610 12:19:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:11.610 12:19:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:11.611 12:19:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:11.611 12:19:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:11.611 12:19:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:11.611 12:19:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:11.611 12:19:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:11.611 12:19:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:11.611 12:19:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.611 12:19:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:20:11.611 12:19:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.611 12:19:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.611 12:19:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.611 12:19:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:11.611 "name": "Existed_Raid", 00:20:11.611 "uuid": "922a7c8f-c88f-491b-8e5f-1720669bbbbc", 00:20:11.611 "strip_size_kb": 64, 00:20:11.611 "state": "configuring", 00:20:11.611 "raid_level": "raid5f", 00:20:11.611 "superblock": true, 00:20:11.611 "num_base_bdevs": 3, 00:20:11.611 "num_base_bdevs_discovered": 2, 00:20:11.611 "num_base_bdevs_operational": 3, 00:20:11.611 "base_bdevs_list": [ 00:20:11.611 { 00:20:11.611 "name": "BaseBdev1", 00:20:11.611 "uuid": "0c35f1eb-4a5b-4fda-a63a-e0d2d869f135", 00:20:11.611 "is_configured": true, 00:20:11.611 "data_offset": 2048, 00:20:11.611 "data_size": 63488 00:20:11.611 }, 00:20:11.611 { 00:20:11.611 "name": "BaseBdev2", 00:20:11.611 "uuid": "cd503d92-8b44-4694-9f7a-563ca56292f2", 00:20:11.611 "is_configured": true, 00:20:11.611 "data_offset": 2048, 00:20:11.611 "data_size": 63488 00:20:11.611 }, 00:20:11.611 { 00:20:11.611 "name": "BaseBdev3", 00:20:11.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.611 "is_configured": false, 00:20:11.611 "data_offset": 0, 00:20:11.611 "data_size": 0 00:20:11.611 } 00:20:11.611 ] 00:20:11.611 }' 00:20:11.611 12:19:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:11.611 12:19:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.177 12:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:12.177 12:19:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:20:12.177 12:19:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.177 [2024-11-25 12:19:08.181143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:12.177 [2024-11-25 12:19:08.181630] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:12.177 [2024-11-25 12:19:08.181672] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:12.178 [2024-11-25 12:19:08.182016] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:12.178 BaseBdev3 00:20:12.178 12:19:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.178 12:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:20:12.178 12:19:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:12.178 12:19:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:12.178 12:19:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:12.178 12:19:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:12.178 12:19:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:12.178 12:19:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:12.178 12:19:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.178 12:19:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.178 [2024-11-25 12:19:08.187293] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:12.178 [2024-11-25 12:19:08.187323] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:12.178 [2024-11-25 12:19:08.187539] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:12.178 12:19:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.178 12:19:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:12.178 12:19:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.178 12:19:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.178 [ 00:20:12.178 { 00:20:12.178 "name": "BaseBdev3", 00:20:12.178 "aliases": [ 00:20:12.178 "ee58f68a-38ae-485e-8c75-239b6ecd9166" 00:20:12.178 ], 00:20:12.178 "product_name": "Malloc disk", 00:20:12.178 "block_size": 512, 00:20:12.178 "num_blocks": 65536, 00:20:12.178 "uuid": "ee58f68a-38ae-485e-8c75-239b6ecd9166", 00:20:12.178 "assigned_rate_limits": { 00:20:12.178 "rw_ios_per_sec": 0, 00:20:12.178 "rw_mbytes_per_sec": 0, 00:20:12.178 "r_mbytes_per_sec": 0, 00:20:12.178 "w_mbytes_per_sec": 0 00:20:12.178 }, 00:20:12.178 "claimed": true, 00:20:12.178 "claim_type": "exclusive_write", 00:20:12.178 "zoned": false, 00:20:12.178 "supported_io_types": { 00:20:12.178 "read": true, 00:20:12.178 "write": true, 00:20:12.178 "unmap": true, 00:20:12.178 "flush": true, 00:20:12.178 "reset": true, 00:20:12.178 "nvme_admin": false, 00:20:12.178 "nvme_io": false, 00:20:12.178 "nvme_io_md": false, 00:20:12.178 "write_zeroes": true, 00:20:12.178 "zcopy": true, 00:20:12.178 "get_zone_info": false, 00:20:12.178 "zone_management": false, 00:20:12.178 "zone_append": false, 00:20:12.178 "compare": false, 00:20:12.178 "compare_and_write": false, 00:20:12.178 "abort": true, 00:20:12.178 "seek_hole": false, 00:20:12.178 "seek_data": false, 00:20:12.178 "copy": true, 00:20:12.178 
"nvme_iov_md": false 00:20:12.178 }, 00:20:12.178 "memory_domains": [ 00:20:12.178 { 00:20:12.178 "dma_device_id": "system", 00:20:12.178 "dma_device_type": 1 00:20:12.178 }, 00:20:12.178 { 00:20:12.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:12.178 "dma_device_type": 2 00:20:12.178 } 00:20:12.178 ], 00:20:12.178 "driver_specific": {} 00:20:12.178 } 00:20:12.178 ] 00:20:12.178 12:19:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.178 12:19:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:12.178 12:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:12.178 12:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:12.178 12:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:20:12.178 12:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:12.178 12:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:12.178 12:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:12.178 12:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:12.178 12:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:12.178 12:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:12.178 12:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:12.178 12:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:12.178 12:19:08 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:20:12.178 12:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.178 12:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:12.178 12:19:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.178 12:19:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.178 12:19:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.437 12:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:12.437 "name": "Existed_Raid", 00:20:12.437 "uuid": "922a7c8f-c88f-491b-8e5f-1720669bbbbc", 00:20:12.437 "strip_size_kb": 64, 00:20:12.437 "state": "online", 00:20:12.437 "raid_level": "raid5f", 00:20:12.437 "superblock": true, 00:20:12.437 "num_base_bdevs": 3, 00:20:12.437 "num_base_bdevs_discovered": 3, 00:20:12.437 "num_base_bdevs_operational": 3, 00:20:12.437 "base_bdevs_list": [ 00:20:12.437 { 00:20:12.437 "name": "BaseBdev1", 00:20:12.437 "uuid": "0c35f1eb-4a5b-4fda-a63a-e0d2d869f135", 00:20:12.437 "is_configured": true, 00:20:12.437 "data_offset": 2048, 00:20:12.437 "data_size": 63488 00:20:12.437 }, 00:20:12.437 { 00:20:12.437 "name": "BaseBdev2", 00:20:12.437 "uuid": "cd503d92-8b44-4694-9f7a-563ca56292f2", 00:20:12.437 "is_configured": true, 00:20:12.437 "data_offset": 2048, 00:20:12.437 "data_size": 63488 00:20:12.437 }, 00:20:12.437 { 00:20:12.437 "name": "BaseBdev3", 00:20:12.437 "uuid": "ee58f68a-38ae-485e-8c75-239b6ecd9166", 00:20:12.437 "is_configured": true, 00:20:12.437 "data_offset": 2048, 00:20:12.437 "data_size": 63488 00:20:12.437 } 00:20:12.437 ] 00:20:12.437 }' 00:20:12.437 12:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:12.437 12:19:08 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.694 12:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:12.694 12:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:12.694 12:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:12.694 12:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:12.694 12:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:20:12.694 12:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:12.694 12:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:12.694 12:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:12.694 12:19:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.694 12:19:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.694 [2024-11-25 12:19:08.709493] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:12.694 12:19:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.694 12:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:12.694 "name": "Existed_Raid", 00:20:12.694 "aliases": [ 00:20:12.694 "922a7c8f-c88f-491b-8e5f-1720669bbbbc" 00:20:12.694 ], 00:20:12.694 "product_name": "Raid Volume", 00:20:12.694 "block_size": 512, 00:20:12.694 "num_blocks": 126976, 00:20:12.694 "uuid": "922a7c8f-c88f-491b-8e5f-1720669bbbbc", 00:20:12.694 "assigned_rate_limits": { 00:20:12.694 "rw_ios_per_sec": 0, 00:20:12.694 
"rw_mbytes_per_sec": 0, 00:20:12.694 "r_mbytes_per_sec": 0, 00:20:12.694 "w_mbytes_per_sec": 0 00:20:12.694 }, 00:20:12.694 "claimed": false, 00:20:12.694 "zoned": false, 00:20:12.694 "supported_io_types": { 00:20:12.694 "read": true, 00:20:12.694 "write": true, 00:20:12.694 "unmap": false, 00:20:12.695 "flush": false, 00:20:12.695 "reset": true, 00:20:12.695 "nvme_admin": false, 00:20:12.695 "nvme_io": false, 00:20:12.695 "nvme_io_md": false, 00:20:12.695 "write_zeroes": true, 00:20:12.695 "zcopy": false, 00:20:12.695 "get_zone_info": false, 00:20:12.695 "zone_management": false, 00:20:12.695 "zone_append": false, 00:20:12.695 "compare": false, 00:20:12.695 "compare_and_write": false, 00:20:12.695 "abort": false, 00:20:12.695 "seek_hole": false, 00:20:12.695 "seek_data": false, 00:20:12.695 "copy": false, 00:20:12.695 "nvme_iov_md": false 00:20:12.695 }, 00:20:12.695 "driver_specific": { 00:20:12.695 "raid": { 00:20:12.695 "uuid": "922a7c8f-c88f-491b-8e5f-1720669bbbbc", 00:20:12.695 "strip_size_kb": 64, 00:20:12.695 "state": "online", 00:20:12.695 "raid_level": "raid5f", 00:20:12.695 "superblock": true, 00:20:12.695 "num_base_bdevs": 3, 00:20:12.695 "num_base_bdevs_discovered": 3, 00:20:12.695 "num_base_bdevs_operational": 3, 00:20:12.695 "base_bdevs_list": [ 00:20:12.695 { 00:20:12.695 "name": "BaseBdev1", 00:20:12.695 "uuid": "0c35f1eb-4a5b-4fda-a63a-e0d2d869f135", 00:20:12.695 "is_configured": true, 00:20:12.695 "data_offset": 2048, 00:20:12.695 "data_size": 63488 00:20:12.695 }, 00:20:12.695 { 00:20:12.695 "name": "BaseBdev2", 00:20:12.695 "uuid": "cd503d92-8b44-4694-9f7a-563ca56292f2", 00:20:12.695 "is_configured": true, 00:20:12.695 "data_offset": 2048, 00:20:12.695 "data_size": 63488 00:20:12.695 }, 00:20:12.695 { 00:20:12.695 "name": "BaseBdev3", 00:20:12.695 "uuid": "ee58f68a-38ae-485e-8c75-239b6ecd9166", 00:20:12.695 "is_configured": true, 00:20:12.695 "data_offset": 2048, 00:20:12.695 "data_size": 63488 00:20:12.695 } 00:20:12.695 ] 00:20:12.695 } 
00:20:12.695 } 00:20:12.695 }' 00:20:12.695 12:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:12.953 12:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:12.953 BaseBdev2 00:20:12.953 BaseBdev3' 00:20:12.953 12:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:12.953 12:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:12.953 12:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:12.953 12:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:12.953 12:19:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.953 12:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:12.953 12:19:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.953 12:19:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.953 12:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:12.953 12:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:12.953 12:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:12.953 12:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:12.953 12:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:12.953 12:19:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.953 12:19:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.953 12:19:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.953 12:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:12.953 12:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:12.953 12:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:12.953 12:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:12.953 12:19:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.953 12:19:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.953 12:19:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:12.953 12:19:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.953 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:12.953 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:12.953 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:12.953 12:19:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.953 12:19:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.953 [2024-11-25 
12:19:09.013371] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:13.212 12:19:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.212 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:13.212 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:20:13.212 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:13.212 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:20:13.212 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:13.212 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:20:13.212 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:13.212 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:13.212 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:13.212 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:13.212 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:13.212 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:13.212 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:13.212 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:13.212 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:13.212 12:19:09 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.212 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:13.212 12:19:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.212 12:19:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.212 12:19:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.212 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:13.212 "name": "Existed_Raid", 00:20:13.212 "uuid": "922a7c8f-c88f-491b-8e5f-1720669bbbbc", 00:20:13.212 "strip_size_kb": 64, 00:20:13.212 "state": "online", 00:20:13.212 "raid_level": "raid5f", 00:20:13.212 "superblock": true, 00:20:13.212 "num_base_bdevs": 3, 00:20:13.212 "num_base_bdevs_discovered": 2, 00:20:13.212 "num_base_bdevs_operational": 2, 00:20:13.212 "base_bdevs_list": [ 00:20:13.212 { 00:20:13.212 "name": null, 00:20:13.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.212 "is_configured": false, 00:20:13.212 "data_offset": 0, 00:20:13.212 "data_size": 63488 00:20:13.212 }, 00:20:13.212 { 00:20:13.212 "name": "BaseBdev2", 00:20:13.212 "uuid": "cd503d92-8b44-4694-9f7a-563ca56292f2", 00:20:13.212 "is_configured": true, 00:20:13.212 "data_offset": 2048, 00:20:13.212 "data_size": 63488 00:20:13.212 }, 00:20:13.212 { 00:20:13.212 "name": "BaseBdev3", 00:20:13.212 "uuid": "ee58f68a-38ae-485e-8c75-239b6ecd9166", 00:20:13.212 "is_configured": true, 00:20:13.212 "data_offset": 2048, 00:20:13.212 "data_size": 63488 00:20:13.212 } 00:20:13.212 ] 00:20:13.212 }' 00:20:13.212 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:13.212 12:19:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:20:13.788 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:13.788 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:13.788 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.788 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:13.788 12:19:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.788 12:19:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.788 12:19:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.788 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:13.788 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:13.788 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:13.788 12:19:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.788 12:19:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.788 [2024-11-25 12:19:09.638751] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:13.788 [2024-11-25 12:19:09.638937] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:13.788 [2024-11-25 12:19:09.721024] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:13.788 12:19:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.788 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:13.788 12:19:09 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:13.788 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:13.788 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.788 12:19:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.788 12:19:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.788 12:19:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.788 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:13.788 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:13.788 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:20:13.788 12:19:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.788 12:19:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.788 [2024-11-25 12:19:09.769105] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:13.788 [2024-11-25 12:19:09.769164] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:13.788 12:19:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.788 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:13.788 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:13.788 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.788 
12:19:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.788 12:19:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.788 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:13.788 12:19:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.047 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:14.047 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:14.047 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:20:14.047 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:20:14.047 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:14.047 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:14.047 12:19:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.047 12:19:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.047 BaseBdev2 00:20:14.047 12:19:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.047 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:20:14.047 12:19:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:14.047 12:19:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:14.047 12:19:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:14.047 12:19:09 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:14.047 12:19:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:14.047 12:19:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:14.047 12:19:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.047 12:19:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.047 12:19:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.047 12:19:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:14.047 12:19:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.047 12:19:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.047 [ 00:20:14.047 { 00:20:14.047 "name": "BaseBdev2", 00:20:14.047 "aliases": [ 00:20:14.047 "23548a78-2079-40eb-ba5c-ca55d857a098" 00:20:14.047 ], 00:20:14.047 "product_name": "Malloc disk", 00:20:14.047 "block_size": 512, 00:20:14.047 "num_blocks": 65536, 00:20:14.047 "uuid": "23548a78-2079-40eb-ba5c-ca55d857a098", 00:20:14.047 "assigned_rate_limits": { 00:20:14.047 "rw_ios_per_sec": 0, 00:20:14.047 "rw_mbytes_per_sec": 0, 00:20:14.047 "r_mbytes_per_sec": 0, 00:20:14.047 "w_mbytes_per_sec": 0 00:20:14.047 }, 00:20:14.047 "claimed": false, 00:20:14.047 "zoned": false, 00:20:14.047 "supported_io_types": { 00:20:14.047 "read": true, 00:20:14.047 "write": true, 00:20:14.047 "unmap": true, 00:20:14.047 "flush": true, 00:20:14.047 "reset": true, 00:20:14.047 "nvme_admin": false, 00:20:14.047 "nvme_io": false, 00:20:14.047 "nvme_io_md": false, 00:20:14.047 "write_zeroes": true, 00:20:14.047 "zcopy": true, 00:20:14.047 "get_zone_info": false, 
00:20:14.047 "zone_management": false, 00:20:14.047 "zone_append": false, 00:20:14.047 "compare": false, 00:20:14.047 "compare_and_write": false, 00:20:14.047 "abort": true, 00:20:14.047 "seek_hole": false, 00:20:14.047 "seek_data": false, 00:20:14.047 "copy": true, 00:20:14.047 "nvme_iov_md": false 00:20:14.047 }, 00:20:14.047 "memory_domains": [ 00:20:14.047 { 00:20:14.047 "dma_device_id": "system", 00:20:14.047 "dma_device_type": 1 00:20:14.047 }, 00:20:14.047 { 00:20:14.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:14.047 "dma_device_type": 2 00:20:14.047 } 00:20:14.047 ], 00:20:14.047 "driver_specific": {} 00:20:14.047 } 00:20:14.047 ] 00:20:14.047 12:19:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.047 12:19:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:14.047 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:14.047 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:14.047 12:19:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:14.047 12:19:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.047 12:19:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.047 BaseBdev3 00:20:14.047 12:19:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.047 12:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:20:14.047 12:19:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:14.047 12:19:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:14.047 12:19:10 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:14.047 12:19:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:14.047 12:19:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:14.047 12:19:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:14.047 12:19:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.047 12:19:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.047 12:19:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.047 12:19:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:14.047 12:19:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.047 12:19:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.047 [ 00:20:14.047 { 00:20:14.047 "name": "BaseBdev3", 00:20:14.047 "aliases": [ 00:20:14.047 "fcc8942a-d12c-4902-95a7-0b90fa1a3a5e" 00:20:14.047 ], 00:20:14.047 "product_name": "Malloc disk", 00:20:14.047 "block_size": 512, 00:20:14.047 "num_blocks": 65536, 00:20:14.047 "uuid": "fcc8942a-d12c-4902-95a7-0b90fa1a3a5e", 00:20:14.047 "assigned_rate_limits": { 00:20:14.047 "rw_ios_per_sec": 0, 00:20:14.047 "rw_mbytes_per_sec": 0, 00:20:14.047 "r_mbytes_per_sec": 0, 00:20:14.047 "w_mbytes_per_sec": 0 00:20:14.047 }, 00:20:14.047 "claimed": false, 00:20:14.047 "zoned": false, 00:20:14.047 "supported_io_types": { 00:20:14.047 "read": true, 00:20:14.047 "write": true, 00:20:14.047 "unmap": true, 00:20:14.047 "flush": true, 00:20:14.047 "reset": true, 00:20:14.047 "nvme_admin": false, 00:20:14.047 "nvme_io": false, 00:20:14.047 "nvme_io_md": 
false, 00:20:14.047 "write_zeroes": true, 00:20:14.047 "zcopy": true, 00:20:14.047 "get_zone_info": false, 00:20:14.047 "zone_management": false, 00:20:14.047 "zone_append": false, 00:20:14.047 "compare": false, 00:20:14.047 "compare_and_write": false, 00:20:14.047 "abort": true, 00:20:14.047 "seek_hole": false, 00:20:14.048 "seek_data": false, 00:20:14.048 "copy": true, 00:20:14.048 "nvme_iov_md": false 00:20:14.048 }, 00:20:14.048 "memory_domains": [ 00:20:14.048 { 00:20:14.048 "dma_device_id": "system", 00:20:14.048 "dma_device_type": 1 00:20:14.048 }, 00:20:14.048 { 00:20:14.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:14.048 "dma_device_type": 2 00:20:14.048 } 00:20:14.048 ], 00:20:14.048 "driver_specific": {} 00:20:14.048 } 00:20:14.048 ] 00:20:14.048 12:19:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.048 12:19:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:14.048 12:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:14.048 12:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:14.048 12:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:14.048 12:19:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.048 12:19:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.048 [2024-11-25 12:19:10.053402] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:14.048 [2024-11-25 12:19:10.053457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:14.048 [2024-11-25 12:19:10.053488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:20:14.048 [2024-11-25 12:19:10.055875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:14.048 12:19:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.048 12:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:14.048 12:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:14.048 12:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:14.048 12:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:14.048 12:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:14.048 12:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:14.048 12:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:14.048 12:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:14.048 12:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:14.048 12:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:14.048 12:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.048 12:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:14.048 12:19:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.048 12:19:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.048 12:19:10 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.048 12:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:14.048 "name": "Existed_Raid", 00:20:14.048 "uuid": "024da454-abe7-4c22-b230-5eaf73a25f61", 00:20:14.048 "strip_size_kb": 64, 00:20:14.048 "state": "configuring", 00:20:14.048 "raid_level": "raid5f", 00:20:14.048 "superblock": true, 00:20:14.048 "num_base_bdevs": 3, 00:20:14.048 "num_base_bdevs_discovered": 2, 00:20:14.048 "num_base_bdevs_operational": 3, 00:20:14.048 "base_bdevs_list": [ 00:20:14.048 { 00:20:14.048 "name": "BaseBdev1", 00:20:14.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.048 "is_configured": false, 00:20:14.048 "data_offset": 0, 00:20:14.048 "data_size": 0 00:20:14.048 }, 00:20:14.048 { 00:20:14.048 "name": "BaseBdev2", 00:20:14.048 "uuid": "23548a78-2079-40eb-ba5c-ca55d857a098", 00:20:14.048 "is_configured": true, 00:20:14.048 "data_offset": 2048, 00:20:14.048 "data_size": 63488 00:20:14.048 }, 00:20:14.048 { 00:20:14.048 "name": "BaseBdev3", 00:20:14.048 "uuid": "fcc8942a-d12c-4902-95a7-0b90fa1a3a5e", 00:20:14.048 "is_configured": true, 00:20:14.048 "data_offset": 2048, 00:20:14.048 "data_size": 63488 00:20:14.048 } 00:20:14.048 ] 00:20:14.048 }' 00:20:14.048 12:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:14.048 12:19:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.614 12:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:14.614 12:19:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.614 12:19:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.614 [2024-11-25 12:19:10.581506] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:14.614 
12:19:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.614 12:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:14.614 12:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:14.614 12:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:14.614 12:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:14.614 12:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:14.614 12:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:14.614 12:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:14.614 12:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:14.614 12:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:14.614 12:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:14.614 12:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.614 12:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:14.614 12:19:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.614 12:19:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.614 12:19:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.614 12:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:20:14.614 "name": "Existed_Raid", 00:20:14.614 "uuid": "024da454-abe7-4c22-b230-5eaf73a25f61", 00:20:14.614 "strip_size_kb": 64, 00:20:14.614 "state": "configuring", 00:20:14.614 "raid_level": "raid5f", 00:20:14.614 "superblock": true, 00:20:14.614 "num_base_bdevs": 3, 00:20:14.614 "num_base_bdevs_discovered": 1, 00:20:14.614 "num_base_bdevs_operational": 3, 00:20:14.614 "base_bdevs_list": [ 00:20:14.614 { 00:20:14.614 "name": "BaseBdev1", 00:20:14.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.614 "is_configured": false, 00:20:14.614 "data_offset": 0, 00:20:14.614 "data_size": 0 00:20:14.614 }, 00:20:14.614 { 00:20:14.614 "name": null, 00:20:14.614 "uuid": "23548a78-2079-40eb-ba5c-ca55d857a098", 00:20:14.614 "is_configured": false, 00:20:14.614 "data_offset": 0, 00:20:14.614 "data_size": 63488 00:20:14.614 }, 00:20:14.614 { 00:20:14.614 "name": "BaseBdev3", 00:20:14.614 "uuid": "fcc8942a-d12c-4902-95a7-0b90fa1a3a5e", 00:20:14.614 "is_configured": true, 00:20:14.614 "data_offset": 2048, 00:20:14.614 "data_size": 63488 00:20:14.614 } 00:20:14.614 ] 00:20:14.614 }' 00:20:14.614 12:19:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:14.614 12:19:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.181 12:19:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.181 12:19:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.181 12:19:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.181 12:19:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:15.181 12:19:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.181 12:19:11 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:20:15.181 12:19:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:15.181 12:19:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.181 12:19:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.181 [2024-11-25 12:19:11.199479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:15.181 BaseBdev1 00:20:15.181 12:19:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.181 12:19:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:20:15.181 12:19:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:15.181 12:19:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:15.181 12:19:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:15.181 12:19:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:15.181 12:19:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:15.181 12:19:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:15.181 12:19:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.181 12:19:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.181 12:19:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.181 12:19:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:15.181 
12:19:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.181 12:19:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.181 [ 00:20:15.181 { 00:20:15.181 "name": "BaseBdev1", 00:20:15.181 "aliases": [ 00:20:15.181 "758f77f7-2bda-4570-b9f0-99f043bd302b" 00:20:15.181 ], 00:20:15.181 "product_name": "Malloc disk", 00:20:15.181 "block_size": 512, 00:20:15.181 "num_blocks": 65536, 00:20:15.181 "uuid": "758f77f7-2bda-4570-b9f0-99f043bd302b", 00:20:15.181 "assigned_rate_limits": { 00:20:15.181 "rw_ios_per_sec": 0, 00:20:15.181 "rw_mbytes_per_sec": 0, 00:20:15.181 "r_mbytes_per_sec": 0, 00:20:15.181 "w_mbytes_per_sec": 0 00:20:15.181 }, 00:20:15.181 "claimed": true, 00:20:15.181 "claim_type": "exclusive_write", 00:20:15.181 "zoned": false, 00:20:15.181 "supported_io_types": { 00:20:15.181 "read": true, 00:20:15.181 "write": true, 00:20:15.181 "unmap": true, 00:20:15.181 "flush": true, 00:20:15.181 "reset": true, 00:20:15.181 "nvme_admin": false, 00:20:15.181 "nvme_io": false, 00:20:15.181 "nvme_io_md": false, 00:20:15.181 "write_zeroes": true, 00:20:15.181 "zcopy": true, 00:20:15.181 "get_zone_info": false, 00:20:15.181 "zone_management": false, 00:20:15.181 "zone_append": false, 00:20:15.181 "compare": false, 00:20:15.181 "compare_and_write": false, 00:20:15.181 "abort": true, 00:20:15.181 "seek_hole": false, 00:20:15.181 "seek_data": false, 00:20:15.181 "copy": true, 00:20:15.181 "nvme_iov_md": false 00:20:15.181 }, 00:20:15.181 "memory_domains": [ 00:20:15.181 { 00:20:15.181 "dma_device_id": "system", 00:20:15.181 "dma_device_type": 1 00:20:15.181 }, 00:20:15.181 { 00:20:15.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:15.181 "dma_device_type": 2 00:20:15.181 } 00:20:15.181 ], 00:20:15.181 "driver_specific": {} 00:20:15.181 } 00:20:15.181 ] 00:20:15.181 12:19:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.181 
12:19:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:15.181 12:19:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:15.182 12:19:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:15.182 12:19:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:15.182 12:19:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:15.182 12:19:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:15.182 12:19:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:15.182 12:19:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:15.182 12:19:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:15.182 12:19:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:15.182 12:19:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:15.182 12:19:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.182 12:19:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.182 12:19:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.182 12:19:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:15.182 12:19:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.440 12:19:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:20:15.440 "name": "Existed_Raid", 00:20:15.440 "uuid": "024da454-abe7-4c22-b230-5eaf73a25f61", 00:20:15.440 "strip_size_kb": 64, 00:20:15.440 "state": "configuring", 00:20:15.440 "raid_level": "raid5f", 00:20:15.440 "superblock": true, 00:20:15.440 "num_base_bdevs": 3, 00:20:15.440 "num_base_bdevs_discovered": 2, 00:20:15.440 "num_base_bdevs_operational": 3, 00:20:15.440 "base_bdevs_list": [ 00:20:15.440 { 00:20:15.440 "name": "BaseBdev1", 00:20:15.440 "uuid": "758f77f7-2bda-4570-b9f0-99f043bd302b", 00:20:15.440 "is_configured": true, 00:20:15.440 "data_offset": 2048, 00:20:15.440 "data_size": 63488 00:20:15.440 }, 00:20:15.440 { 00:20:15.440 "name": null, 00:20:15.440 "uuid": "23548a78-2079-40eb-ba5c-ca55d857a098", 00:20:15.440 "is_configured": false, 00:20:15.440 "data_offset": 0, 00:20:15.440 "data_size": 63488 00:20:15.440 }, 00:20:15.440 { 00:20:15.440 "name": "BaseBdev3", 00:20:15.440 "uuid": "fcc8942a-d12c-4902-95a7-0b90fa1a3a5e", 00:20:15.440 "is_configured": true, 00:20:15.440 "data_offset": 2048, 00:20:15.440 "data_size": 63488 00:20:15.440 } 00:20:15.440 ] 00:20:15.440 }' 00:20:15.440 12:19:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:15.440 12:19:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.698 12:19:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.698 12:19:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:15.698 12:19:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.698 12:19:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.698 12:19:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.956 12:19:11 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:20:15.956 12:19:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:20:15.956 12:19:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.956 12:19:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.956 [2024-11-25 12:19:11.815687] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:15.956 12:19:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.956 12:19:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:15.956 12:19:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:15.956 12:19:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:15.956 12:19:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:15.956 12:19:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:15.956 12:19:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:15.956 12:19:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:15.956 12:19:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:15.956 12:19:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:15.956 12:19:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:15.956 12:19:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.956 12:19:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.956 12:19:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.956 12:19:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:15.956 12:19:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.956 12:19:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:15.956 "name": "Existed_Raid", 00:20:15.956 "uuid": "024da454-abe7-4c22-b230-5eaf73a25f61", 00:20:15.956 "strip_size_kb": 64, 00:20:15.956 "state": "configuring", 00:20:15.956 "raid_level": "raid5f", 00:20:15.956 "superblock": true, 00:20:15.956 "num_base_bdevs": 3, 00:20:15.956 "num_base_bdevs_discovered": 1, 00:20:15.956 "num_base_bdevs_operational": 3, 00:20:15.956 "base_bdevs_list": [ 00:20:15.956 { 00:20:15.956 "name": "BaseBdev1", 00:20:15.956 "uuid": "758f77f7-2bda-4570-b9f0-99f043bd302b", 00:20:15.956 "is_configured": true, 00:20:15.956 "data_offset": 2048, 00:20:15.956 "data_size": 63488 00:20:15.956 }, 00:20:15.956 { 00:20:15.956 "name": null, 00:20:15.956 "uuid": "23548a78-2079-40eb-ba5c-ca55d857a098", 00:20:15.956 "is_configured": false, 00:20:15.956 "data_offset": 0, 00:20:15.957 "data_size": 63488 00:20:15.957 }, 00:20:15.957 { 00:20:15.957 "name": null, 00:20:15.957 "uuid": "fcc8942a-d12c-4902-95a7-0b90fa1a3a5e", 00:20:15.957 "is_configured": false, 00:20:15.957 "data_offset": 0, 00:20:15.957 "data_size": 63488 00:20:15.957 } 00:20:15.957 ] 00:20:15.957 }' 00:20:15.957 12:19:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:15.957 12:19:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.524 12:19:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 
00:20:16.524 12:19:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:16.524 12:19:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.524 12:19:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.524 12:19:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.524 12:19:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:20:16.524 12:19:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:16.524 12:19:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.524 12:19:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.524 [2024-11-25 12:19:12.387909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:16.524 12:19:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.524 12:19:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:16.524 12:19:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:16.524 12:19:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:16.524 12:19:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:16.524 12:19:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:16.524 12:19:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:16.524 12:19:12 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:16.524 12:19:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:16.524 12:19:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:16.524 12:19:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:16.524 12:19:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.524 12:19:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.524 12:19:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.524 12:19:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:16.524 12:19:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.524 12:19:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:16.524 "name": "Existed_Raid", 00:20:16.524 "uuid": "024da454-abe7-4c22-b230-5eaf73a25f61", 00:20:16.524 "strip_size_kb": 64, 00:20:16.524 "state": "configuring", 00:20:16.525 "raid_level": "raid5f", 00:20:16.525 "superblock": true, 00:20:16.525 "num_base_bdevs": 3, 00:20:16.525 "num_base_bdevs_discovered": 2, 00:20:16.525 "num_base_bdevs_operational": 3, 00:20:16.525 "base_bdevs_list": [ 00:20:16.525 { 00:20:16.525 "name": "BaseBdev1", 00:20:16.525 "uuid": "758f77f7-2bda-4570-b9f0-99f043bd302b", 00:20:16.525 "is_configured": true, 00:20:16.525 "data_offset": 2048, 00:20:16.525 "data_size": 63488 00:20:16.525 }, 00:20:16.525 { 00:20:16.525 "name": null, 00:20:16.525 "uuid": "23548a78-2079-40eb-ba5c-ca55d857a098", 00:20:16.525 "is_configured": false, 00:20:16.525 "data_offset": 0, 00:20:16.525 "data_size": 63488 00:20:16.525 }, 00:20:16.525 { 
00:20:16.525 "name": "BaseBdev3", 00:20:16.525 "uuid": "fcc8942a-d12c-4902-95a7-0b90fa1a3a5e", 00:20:16.525 "is_configured": true, 00:20:16.525 "data_offset": 2048, 00:20:16.525 "data_size": 63488 00:20:16.525 } 00:20:16.525 ] 00:20:16.525 }' 00:20:16.525 12:19:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:16.525 12:19:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.093 12:19:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.093 12:19:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.093 12:19:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.093 12:19:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:17.093 12:19:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.093 12:19:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:20:17.093 12:19:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:17.093 12:19:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.093 12:19:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.093 [2024-11-25 12:19:12.960157] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:17.093 12:19:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.093 12:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:17.093 12:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:20:17.093 12:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:17.093 12:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:17.093 12:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:17.093 12:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:17.093 12:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:17.093 12:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:17.093 12:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:17.093 12:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:17.093 12:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.093 12:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:17.093 12:19:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.093 12:19:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.093 12:19:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.093 12:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.093 "name": "Existed_Raid", 00:20:17.093 "uuid": "024da454-abe7-4c22-b230-5eaf73a25f61", 00:20:17.093 "strip_size_kb": 64, 00:20:17.093 "state": "configuring", 00:20:17.093 "raid_level": "raid5f", 00:20:17.093 "superblock": true, 00:20:17.093 "num_base_bdevs": 3, 00:20:17.093 "num_base_bdevs_discovered": 1, 00:20:17.093 
"num_base_bdevs_operational": 3, 00:20:17.093 "base_bdevs_list": [ 00:20:17.093 { 00:20:17.093 "name": null, 00:20:17.093 "uuid": "758f77f7-2bda-4570-b9f0-99f043bd302b", 00:20:17.093 "is_configured": false, 00:20:17.093 "data_offset": 0, 00:20:17.093 "data_size": 63488 00:20:17.093 }, 00:20:17.093 { 00:20:17.093 "name": null, 00:20:17.093 "uuid": "23548a78-2079-40eb-ba5c-ca55d857a098", 00:20:17.093 "is_configured": false, 00:20:17.093 "data_offset": 0, 00:20:17.093 "data_size": 63488 00:20:17.093 }, 00:20:17.093 { 00:20:17.093 "name": "BaseBdev3", 00:20:17.093 "uuid": "fcc8942a-d12c-4902-95a7-0b90fa1a3a5e", 00:20:17.093 "is_configured": true, 00:20:17.093 "data_offset": 2048, 00:20:17.093 "data_size": 63488 00:20:17.093 } 00:20:17.093 ] 00:20:17.093 }' 00:20:17.093 12:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.093 12:19:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.661 12:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:17.661 12:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.661 12:19:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.661 12:19:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.661 12:19:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.661 12:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:20:17.661 12:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:17.661 12:19:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.661 12:19:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.661 [2024-11-25 12:19:13.583959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:17.661 12:19:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.661 12:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:17.661 12:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:17.661 12:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:17.661 12:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:17.661 12:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:17.661 12:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:17.661 12:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:17.661 12:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:17.661 12:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:17.661 12:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:17.661 12:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:17.661 12:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.661 12:19:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.661 12:19:13 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:20:17.661 12:19:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.661 12:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.661 "name": "Existed_Raid", 00:20:17.661 "uuid": "024da454-abe7-4c22-b230-5eaf73a25f61", 00:20:17.661 "strip_size_kb": 64, 00:20:17.661 "state": "configuring", 00:20:17.661 "raid_level": "raid5f", 00:20:17.661 "superblock": true, 00:20:17.661 "num_base_bdevs": 3, 00:20:17.661 "num_base_bdevs_discovered": 2, 00:20:17.661 "num_base_bdevs_operational": 3, 00:20:17.661 "base_bdevs_list": [ 00:20:17.661 { 00:20:17.661 "name": null, 00:20:17.661 "uuid": "758f77f7-2bda-4570-b9f0-99f043bd302b", 00:20:17.661 "is_configured": false, 00:20:17.661 "data_offset": 0, 00:20:17.661 "data_size": 63488 00:20:17.661 }, 00:20:17.661 { 00:20:17.661 "name": "BaseBdev2", 00:20:17.661 "uuid": "23548a78-2079-40eb-ba5c-ca55d857a098", 00:20:17.661 "is_configured": true, 00:20:17.661 "data_offset": 2048, 00:20:17.661 "data_size": 63488 00:20:17.661 }, 00:20:17.661 { 00:20:17.661 "name": "BaseBdev3", 00:20:17.661 "uuid": "fcc8942a-d12c-4902-95a7-0b90fa1a3a5e", 00:20:17.661 "is_configured": true, 00:20:17.661 "data_offset": 2048, 00:20:17.661 "data_size": 63488 00:20:17.661 } 00:20:17.661 ] 00:20:17.661 }' 00:20:17.661 12:19:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.661 12:19:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.229 12:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.229 12:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:18.229 12:19:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.229 12:19:14 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.229 12:19:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.229 12:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:20:18.229 12:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.229 12:19:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.229 12:19:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.229 12:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:18.229 12:19:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.229 12:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 758f77f7-2bda-4570-b9f0-99f043bd302b 00:20:18.229 12:19:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.229 12:19:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.229 [2024-11-25 12:19:14.221631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:18.229 [2024-11-25 12:19:14.221915] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:18.229 [2024-11-25 12:19:14.221946] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:18.229 [2024-11-25 12:19:14.222262] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:18.229 NewBaseBdev 00:20:18.229 12:19:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.229 12:19:14 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:20:18.229 12:19:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:20:18.229 12:19:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:18.229 12:19:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:18.229 12:19:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:18.229 12:19:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:18.229 12:19:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:18.229 12:19:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.229 12:19:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.229 [2024-11-25 12:19:14.227145] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:18.229 [2024-11-25 12:19:14.227182] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:20:18.229 [2024-11-25 12:19:14.227507] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:18.229 12:19:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.229 12:19:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:18.229 12:19:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.229 12:19:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.229 [ 00:20:18.229 { 00:20:18.229 "name": "NewBaseBdev", 00:20:18.229 
"aliases": [ 00:20:18.229 "758f77f7-2bda-4570-b9f0-99f043bd302b" 00:20:18.229 ], 00:20:18.229 "product_name": "Malloc disk", 00:20:18.229 "block_size": 512, 00:20:18.229 "num_blocks": 65536, 00:20:18.229 "uuid": "758f77f7-2bda-4570-b9f0-99f043bd302b", 00:20:18.229 "assigned_rate_limits": { 00:20:18.229 "rw_ios_per_sec": 0, 00:20:18.229 "rw_mbytes_per_sec": 0, 00:20:18.229 "r_mbytes_per_sec": 0, 00:20:18.229 "w_mbytes_per_sec": 0 00:20:18.229 }, 00:20:18.229 "claimed": true, 00:20:18.229 "claim_type": "exclusive_write", 00:20:18.229 "zoned": false, 00:20:18.229 "supported_io_types": { 00:20:18.229 "read": true, 00:20:18.229 "write": true, 00:20:18.229 "unmap": true, 00:20:18.229 "flush": true, 00:20:18.229 "reset": true, 00:20:18.229 "nvme_admin": false, 00:20:18.229 "nvme_io": false, 00:20:18.229 "nvme_io_md": false, 00:20:18.229 "write_zeroes": true, 00:20:18.229 "zcopy": true, 00:20:18.229 "get_zone_info": false, 00:20:18.229 "zone_management": false, 00:20:18.229 "zone_append": false, 00:20:18.229 "compare": false, 00:20:18.229 "compare_and_write": false, 00:20:18.229 "abort": true, 00:20:18.229 "seek_hole": false, 00:20:18.229 "seek_data": false, 00:20:18.229 "copy": true, 00:20:18.229 "nvme_iov_md": false 00:20:18.229 }, 00:20:18.229 "memory_domains": [ 00:20:18.229 { 00:20:18.229 "dma_device_id": "system", 00:20:18.229 "dma_device_type": 1 00:20:18.229 }, 00:20:18.229 { 00:20:18.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:18.229 "dma_device_type": 2 00:20:18.229 } 00:20:18.229 ], 00:20:18.229 "driver_specific": {} 00:20:18.229 } 00:20:18.229 ] 00:20:18.229 12:19:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.229 12:19:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:18.229 12:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:20:18.229 12:19:14 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:18.229 12:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:18.229 12:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:18.229 12:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:18.229 12:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:18.229 12:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:18.229 12:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:18.229 12:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:18.229 12:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:18.229 12:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.229 12:19:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.229 12:19:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.229 12:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:18.229 12:19:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.229 12:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:18.229 "name": "Existed_Raid", 00:20:18.229 "uuid": "024da454-abe7-4c22-b230-5eaf73a25f61", 00:20:18.229 "strip_size_kb": 64, 00:20:18.229 "state": "online", 00:20:18.229 "raid_level": "raid5f", 00:20:18.229 "superblock": true, 00:20:18.229 
"num_base_bdevs": 3, 00:20:18.230 "num_base_bdevs_discovered": 3, 00:20:18.230 "num_base_bdevs_operational": 3, 00:20:18.230 "base_bdevs_list": [ 00:20:18.230 { 00:20:18.230 "name": "NewBaseBdev", 00:20:18.230 "uuid": "758f77f7-2bda-4570-b9f0-99f043bd302b", 00:20:18.230 "is_configured": true, 00:20:18.230 "data_offset": 2048, 00:20:18.230 "data_size": 63488 00:20:18.230 }, 00:20:18.230 { 00:20:18.230 "name": "BaseBdev2", 00:20:18.230 "uuid": "23548a78-2079-40eb-ba5c-ca55d857a098", 00:20:18.230 "is_configured": true, 00:20:18.230 "data_offset": 2048, 00:20:18.230 "data_size": 63488 00:20:18.230 }, 00:20:18.230 { 00:20:18.230 "name": "BaseBdev3", 00:20:18.230 "uuid": "fcc8942a-d12c-4902-95a7-0b90fa1a3a5e", 00:20:18.230 "is_configured": true, 00:20:18.230 "data_offset": 2048, 00:20:18.230 "data_size": 63488 00:20:18.230 } 00:20:18.230 ] 00:20:18.230 }' 00:20:18.230 12:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:18.230 12:19:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.797 12:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:20:18.797 12:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:18.797 12:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:18.797 12:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:18.797 12:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:20:18.797 12:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:18.797 12:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:18.797 12:19:14 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.797 12:19:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.797 12:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:18.797 [2024-11-25 12:19:14.785477] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:18.797 12:19:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.797 12:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:18.797 "name": "Existed_Raid", 00:20:18.797 "aliases": [ 00:20:18.797 "024da454-abe7-4c22-b230-5eaf73a25f61" 00:20:18.797 ], 00:20:18.797 "product_name": "Raid Volume", 00:20:18.797 "block_size": 512, 00:20:18.797 "num_blocks": 126976, 00:20:18.797 "uuid": "024da454-abe7-4c22-b230-5eaf73a25f61", 00:20:18.797 "assigned_rate_limits": { 00:20:18.797 "rw_ios_per_sec": 0, 00:20:18.797 "rw_mbytes_per_sec": 0, 00:20:18.797 "r_mbytes_per_sec": 0, 00:20:18.797 "w_mbytes_per_sec": 0 00:20:18.797 }, 00:20:18.797 "claimed": false, 00:20:18.797 "zoned": false, 00:20:18.797 "supported_io_types": { 00:20:18.797 "read": true, 00:20:18.797 "write": true, 00:20:18.797 "unmap": false, 00:20:18.797 "flush": false, 00:20:18.797 "reset": true, 00:20:18.797 "nvme_admin": false, 00:20:18.797 "nvme_io": false, 00:20:18.797 "nvme_io_md": false, 00:20:18.797 "write_zeroes": true, 00:20:18.797 "zcopy": false, 00:20:18.797 "get_zone_info": false, 00:20:18.797 "zone_management": false, 00:20:18.797 "zone_append": false, 00:20:18.797 "compare": false, 00:20:18.797 "compare_and_write": false, 00:20:18.797 "abort": false, 00:20:18.798 "seek_hole": false, 00:20:18.798 "seek_data": false, 00:20:18.798 "copy": false, 00:20:18.798 "nvme_iov_md": false 00:20:18.798 }, 00:20:18.798 "driver_specific": { 00:20:18.798 "raid": { 00:20:18.798 "uuid": "024da454-abe7-4c22-b230-5eaf73a25f61", 00:20:18.798 
"strip_size_kb": 64, 00:20:18.798 "state": "online", 00:20:18.798 "raid_level": "raid5f", 00:20:18.798 "superblock": true, 00:20:18.798 "num_base_bdevs": 3, 00:20:18.798 "num_base_bdevs_discovered": 3, 00:20:18.798 "num_base_bdevs_operational": 3, 00:20:18.798 "base_bdevs_list": [ 00:20:18.798 { 00:20:18.798 "name": "NewBaseBdev", 00:20:18.798 "uuid": "758f77f7-2bda-4570-b9f0-99f043bd302b", 00:20:18.798 "is_configured": true, 00:20:18.798 "data_offset": 2048, 00:20:18.798 "data_size": 63488 00:20:18.798 }, 00:20:18.798 { 00:20:18.798 "name": "BaseBdev2", 00:20:18.798 "uuid": "23548a78-2079-40eb-ba5c-ca55d857a098", 00:20:18.798 "is_configured": true, 00:20:18.798 "data_offset": 2048, 00:20:18.798 "data_size": 63488 00:20:18.798 }, 00:20:18.798 { 00:20:18.798 "name": "BaseBdev3", 00:20:18.798 "uuid": "fcc8942a-d12c-4902-95a7-0b90fa1a3a5e", 00:20:18.798 "is_configured": true, 00:20:18.798 "data_offset": 2048, 00:20:18.798 "data_size": 63488 00:20:18.798 } 00:20:18.798 ] 00:20:18.798 } 00:20:18.798 } 00:20:18.798 }' 00:20:18.798 12:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:18.798 12:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:20:18.798 BaseBdev2 00:20:18.798 BaseBdev3' 00:20:18.798 12:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:19.056 12:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:19.056 12:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:19.056 12:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:19.056 12:19:14 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:20:19.056 12:19:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.056 12:19:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.056 12:19:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.056 12:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:19.056 12:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:19.056 12:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:19.056 12:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:19.056 12:19:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.056 12:19:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.056 12:19:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:19.057 12:19:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.057 12:19:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:19.057 12:19:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:19.057 12:19:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:19.057 12:19:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:19.057 12:19:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq 
-r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:19.057 12:19:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.057 12:19:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.057 12:19:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.057 12:19:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:19.057 12:19:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:19.057 12:19:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:19.057 12:19:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.057 12:19:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.057 [2024-11-25 12:19:15.101262] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:19.057 [2024-11-25 12:19:15.101299] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:19.057 [2024-11-25 12:19:15.101404] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:19.057 [2024-11-25 12:19:15.101750] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:19.057 [2024-11-25 12:19:15.101782] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:20:19.057 12:19:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.057 12:19:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80858 00:20:19.057 12:19:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 
80858 ']' 00:20:19.057 12:19:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80858 00:20:19.057 12:19:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:20:19.057 12:19:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:19.057 12:19:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80858 00:20:19.057 12:19:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:19.057 12:19:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:19.057 killing process with pid 80858 00:20:19.057 12:19:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80858' 00:20:19.057 12:19:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80858 00:20:19.057 [2024-11-25 12:19:15.139656] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:19.057 12:19:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80858 00:20:19.625 [2024-11-25 12:19:15.405995] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:20.560 12:19:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:20:20.560 00:20:20.560 real 0m11.657s 00:20:20.560 user 0m19.265s 00:20:20.560 sys 0m1.636s 00:20:20.560 12:19:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:20.560 12:19:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.560 ************************************ 00:20:20.560 END TEST raid5f_state_function_test_sb 00:20:20.560 ************************************ 00:20:20.560 12:19:16 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test 
raid5f_superblock_test raid_superblock_test raid5f 3 00:20:20.560 12:19:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:20.560 12:19:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:20.560 12:19:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:20.560 ************************************ 00:20:20.560 START TEST raid5f_superblock_test 00:20:20.560 ************************************ 00:20:20.560 12:19:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:20:20.560 12:19:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:20:20.560 12:19:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:20:20.560 12:19:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:20.560 12:19:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:20.560 12:19:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:20.560 12:19:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:20:20.560 12:19:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:20.560 12:19:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:20.560 12:19:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:20.560 12:19:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:20.560 12:19:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:20.560 12:19:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:20.560 12:19:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:20.560 12:19:16 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:20:20.560 12:19:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:20:20.560 12:19:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:20:20.560 12:19:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81486 00:20:20.560 12:19:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81486 00:20:20.560 12:19:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81486 ']' 00:20:20.560 12:19:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.560 12:19:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:20.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.560 12:19:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.560 12:19:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:20.560 12:19:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.560 12:19:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:20.819 [2024-11-25 12:19:16.675038] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:20:20.819 [2024-11-25 12:19:16.675213] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81486 ] 00:20:20.819 [2024-11-25 12:19:16.862932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.077 [2024-11-25 12:19:17.013535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.336 [2024-11-25 12:19:17.236095] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:21.336 [2024-11-25 12:19:17.236236] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:21.595 12:19:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:21.595 12:19:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:20:21.595 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:21.595 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:21.595 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:21.595 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:20:21.595 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:21.595 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:21.595 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:21.595 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:21.595 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:20:21.595 12:19:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.595 12:19:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.854 malloc1 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.854 [2024-11-25 12:19:17.699013] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:21.854 [2024-11-25 12:19:17.699089] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:21.854 [2024-11-25 12:19:17.699127] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:21.854 [2024-11-25 12:19:17.699144] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:21.854 [2024-11-25 12:19:17.701949] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:21.854 [2024-11-25 12:19:17.701991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:21.854 pt1 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.854 malloc2 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.854 [2024-11-25 12:19:17.747314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:21.854 [2024-11-25 12:19:17.747399] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:21.854 [2024-11-25 12:19:17.747432] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:21.854 [2024-11-25 12:19:17.747448] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:21.854 [2024-11-25 12:19:17.750235] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:21.854 [2024-11-25 12:19:17.750281] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:21.854 pt2 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.854 malloc3 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.854 [2024-11-25 12:19:17.815836] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:21.854 [2024-11-25 12:19:17.815897] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:21.854 [2024-11-25 12:19:17.815933] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:21.854 [2024-11-25 12:19:17.815950] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:21.854 [2024-11-25 12:19:17.818796] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:21.854 [2024-11-25 12:19:17.818839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:21.854 pt3 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.854 [2024-11-25 12:19:17.823922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:21.854 [2024-11-25 12:19:17.826478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:21.854 [2024-11-25 12:19:17.826580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:21.854 [2024-11-25 12:19:17.826816] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:21.854 [2024-11-25 12:19:17.826858] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:20:21.854 [2024-11-25 12:19:17.827172] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:21.854 [2024-11-25 12:19:17.832792] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:21.854 [2024-11-25 12:19:17.832825] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:21.854 [2024-11-25 12:19:17.833117] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:21.854 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:21.855 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:21.855 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:21.855 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.855 12:19:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.855 
12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.855 12:19:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.855 12:19:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.855 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:21.855 "name": "raid_bdev1", 00:20:21.855 "uuid": "d7975dc9-5d1f-4610-b2e2-4fcf8b315800", 00:20:21.855 "strip_size_kb": 64, 00:20:21.855 "state": "online", 00:20:21.855 "raid_level": "raid5f", 00:20:21.855 "superblock": true, 00:20:21.855 "num_base_bdevs": 3, 00:20:21.855 "num_base_bdevs_discovered": 3, 00:20:21.855 "num_base_bdevs_operational": 3, 00:20:21.855 "base_bdevs_list": [ 00:20:21.855 { 00:20:21.855 "name": "pt1", 00:20:21.855 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:21.855 "is_configured": true, 00:20:21.855 "data_offset": 2048, 00:20:21.855 "data_size": 63488 00:20:21.855 }, 00:20:21.855 { 00:20:21.855 "name": "pt2", 00:20:21.855 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:21.855 "is_configured": true, 00:20:21.855 "data_offset": 2048, 00:20:21.855 "data_size": 63488 00:20:21.855 }, 00:20:21.855 { 00:20:21.855 "name": "pt3", 00:20:21.855 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:21.855 "is_configured": true, 00:20:21.855 "data_offset": 2048, 00:20:21.855 "data_size": 63488 00:20:21.855 } 00:20:21.855 ] 00:20:21.855 }' 00:20:21.855 12:19:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:21.855 12:19:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.422 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:22.422 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:22.422 12:19:18 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:22.422 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:22.422 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:22.422 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:22.422 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:22.422 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.422 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:22.422 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.422 [2024-11-25 12:19:18.339411] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:22.422 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.422 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:22.422 "name": "raid_bdev1", 00:20:22.422 "aliases": [ 00:20:22.422 "d7975dc9-5d1f-4610-b2e2-4fcf8b315800" 00:20:22.422 ], 00:20:22.422 "product_name": "Raid Volume", 00:20:22.422 "block_size": 512, 00:20:22.422 "num_blocks": 126976, 00:20:22.422 "uuid": "d7975dc9-5d1f-4610-b2e2-4fcf8b315800", 00:20:22.422 "assigned_rate_limits": { 00:20:22.422 "rw_ios_per_sec": 0, 00:20:22.422 "rw_mbytes_per_sec": 0, 00:20:22.422 "r_mbytes_per_sec": 0, 00:20:22.422 "w_mbytes_per_sec": 0 00:20:22.422 }, 00:20:22.422 "claimed": false, 00:20:22.422 "zoned": false, 00:20:22.422 "supported_io_types": { 00:20:22.422 "read": true, 00:20:22.422 "write": true, 00:20:22.422 "unmap": false, 00:20:22.422 "flush": false, 00:20:22.422 "reset": true, 00:20:22.422 "nvme_admin": false, 00:20:22.422 "nvme_io": false, 00:20:22.422 "nvme_io_md": false, 
00:20:22.422 "write_zeroes": true, 00:20:22.422 "zcopy": false, 00:20:22.422 "get_zone_info": false, 00:20:22.422 "zone_management": false, 00:20:22.422 "zone_append": false, 00:20:22.422 "compare": false, 00:20:22.422 "compare_and_write": false, 00:20:22.422 "abort": false, 00:20:22.422 "seek_hole": false, 00:20:22.422 "seek_data": false, 00:20:22.422 "copy": false, 00:20:22.422 "nvme_iov_md": false 00:20:22.422 }, 00:20:22.422 "driver_specific": { 00:20:22.422 "raid": { 00:20:22.422 "uuid": "d7975dc9-5d1f-4610-b2e2-4fcf8b315800", 00:20:22.422 "strip_size_kb": 64, 00:20:22.422 "state": "online", 00:20:22.422 "raid_level": "raid5f", 00:20:22.422 "superblock": true, 00:20:22.422 "num_base_bdevs": 3, 00:20:22.422 "num_base_bdevs_discovered": 3, 00:20:22.422 "num_base_bdevs_operational": 3, 00:20:22.422 "base_bdevs_list": [ 00:20:22.422 { 00:20:22.422 "name": "pt1", 00:20:22.422 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:22.422 "is_configured": true, 00:20:22.422 "data_offset": 2048, 00:20:22.422 "data_size": 63488 00:20:22.422 }, 00:20:22.422 { 00:20:22.422 "name": "pt2", 00:20:22.422 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:22.422 "is_configured": true, 00:20:22.422 "data_offset": 2048, 00:20:22.422 "data_size": 63488 00:20:22.422 }, 00:20:22.422 { 00:20:22.422 "name": "pt3", 00:20:22.422 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:22.422 "is_configured": true, 00:20:22.422 "data_offset": 2048, 00:20:22.422 "data_size": 63488 00:20:22.422 } 00:20:22.422 ] 00:20:22.422 } 00:20:22.422 } 00:20:22.422 }' 00:20:22.422 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:22.422 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:22.422 pt2 00:20:22.422 pt3' 00:20:22.422 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:20:22.422 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:22.422 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:22.422 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:22.422 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.422 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:22.422 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.681 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.681 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:22.681 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:22.681 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:22.681 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:22.681 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:22.681 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.681 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.681 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.681 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:22.681 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:22.681 
12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:22.681 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:20:22.681 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.681 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.681 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:22.681 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.681 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:22.681 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:22.681 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:22.681 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.681 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.681 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:22.681 [2024-11-25 12:19:18.679467] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:22.681 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.681 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d7975dc9-5d1f-4610-b2e2-4fcf8b315800 00:20:22.681 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d7975dc9-5d1f-4610-b2e2-4fcf8b315800 ']' 00:20:22.681 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:22.681 12:19:18 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.681 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.681 [2024-11-25 12:19:18.731225] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:22.681 [2024-11-25 12:19:18.731269] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:22.681 [2024-11-25 12:19:18.731399] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:22.681 [2024-11-25 12:19:18.731507] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:22.681 [2024-11-25 12:19:18.731526] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:22.681 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.681 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.681 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:22.681 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.681 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.681 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.942 [2024-11-25 12:19:18.875365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:22.942 [2024-11-25 12:19:18.877934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:22.942 [2024-11-25 12:19:18.878023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:22.942 [2024-11-25 12:19:18.878108] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:22.942 [2024-11-25 12:19:18.878182] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:22.942 [2024-11-25 12:19:18.878216] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:20:22.942 [2024-11-25 12:19:18.878260] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:22.942 [2024-11-25 12:19:18.878275] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:22.942 request: 00:20:22.942 { 00:20:22.942 "name": "raid_bdev1", 00:20:22.942 "raid_level": "raid5f", 00:20:22.942 "base_bdevs": [ 00:20:22.942 "malloc1", 00:20:22.942 "malloc2", 00:20:22.942 "malloc3" 00:20:22.942 ], 00:20:22.942 "strip_size_kb": 64, 00:20:22.942 "superblock": false, 00:20:22.942 "method": "bdev_raid_create", 00:20:22.942 "req_id": 1 00:20:22.942 } 00:20:22.942 Got JSON-RPC error response 00:20:22.942 response: 00:20:22.942 { 00:20:22.942 "code": -17, 00:20:22.942 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:22.942 } 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:22.942 
12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.942 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.942 [2024-11-25 12:19:18.943276] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:22.942 [2024-11-25 12:19:18.943411] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:22.942 [2024-11-25 12:19:18.943450] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:22.942 [2024-11-25 12:19:18.943466] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:22.942 [2024-11-25 12:19:18.946515] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:22.942 [2024-11-25 12:19:18.946558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:22.942 [2024-11-25 12:19:18.946675] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:22.942 [2024-11-25 12:19:18.946749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:22.942 pt1 00:20:22.943 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.943 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:20:22.943 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:22.943 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:22.943 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:22.943 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:22.943 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:22.943 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:22.943 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:22.943 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:22.943 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:22.943 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.943 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.943 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.943 12:19:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.943 12:19:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.943 12:19:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:22.943 "name": "raid_bdev1", 00:20:22.943 "uuid": "d7975dc9-5d1f-4610-b2e2-4fcf8b315800", 00:20:22.943 "strip_size_kb": 64, 00:20:22.943 "state": "configuring", 00:20:22.943 "raid_level": "raid5f", 00:20:22.943 "superblock": true, 00:20:22.943 "num_base_bdevs": 3, 00:20:22.943 "num_base_bdevs_discovered": 1, 00:20:22.943 
"num_base_bdevs_operational": 3, 00:20:22.943 "base_bdevs_list": [ 00:20:22.943 { 00:20:22.943 "name": "pt1", 00:20:22.943 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:22.943 "is_configured": true, 00:20:22.943 "data_offset": 2048, 00:20:22.943 "data_size": 63488 00:20:22.943 }, 00:20:22.943 { 00:20:22.943 "name": null, 00:20:22.943 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:22.943 "is_configured": false, 00:20:22.943 "data_offset": 2048, 00:20:22.943 "data_size": 63488 00:20:22.943 }, 00:20:22.943 { 00:20:22.943 "name": null, 00:20:22.943 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:22.943 "is_configured": false, 00:20:22.943 "data_offset": 2048, 00:20:22.943 "data_size": 63488 00:20:22.943 } 00:20:22.943 ] 00:20:22.943 }' 00:20:22.943 12:19:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:22.943 12:19:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.510 12:19:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:20:23.510 12:19:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:23.510 12:19:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.510 12:19:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.510 [2024-11-25 12:19:19.459454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:23.510 [2024-11-25 12:19:19.459532] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:23.510 [2024-11-25 12:19:19.459568] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:20:23.510 [2024-11-25 12:19:19.459584] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:23.510 [2024-11-25 12:19:19.460150] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:23.510 [2024-11-25 12:19:19.460200] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:23.510 [2024-11-25 12:19:19.460313] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:23.510 [2024-11-25 12:19:19.460363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:23.510 pt2 00:20:23.510 12:19:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.510 12:19:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:20:23.510 12:19:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.510 12:19:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.510 [2024-11-25 12:19:19.467459] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:23.510 12:19:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.510 12:19:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:20:23.510 12:19:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:23.510 12:19:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:23.510 12:19:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:23.510 12:19:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:23.510 12:19:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:23.510 12:19:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:23.510 12:19:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:20:23.510 12:19:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:23.510 12:19:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:23.510 12:19:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.510 12:19:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.510 12:19:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.510 12:19:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.510 12:19:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.510 12:19:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:23.510 "name": "raid_bdev1", 00:20:23.510 "uuid": "d7975dc9-5d1f-4610-b2e2-4fcf8b315800", 00:20:23.510 "strip_size_kb": 64, 00:20:23.510 "state": "configuring", 00:20:23.510 "raid_level": "raid5f", 00:20:23.510 "superblock": true, 00:20:23.510 "num_base_bdevs": 3, 00:20:23.510 "num_base_bdevs_discovered": 1, 00:20:23.510 "num_base_bdevs_operational": 3, 00:20:23.510 "base_bdevs_list": [ 00:20:23.510 { 00:20:23.510 "name": "pt1", 00:20:23.510 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:23.510 "is_configured": true, 00:20:23.510 "data_offset": 2048, 00:20:23.510 "data_size": 63488 00:20:23.510 }, 00:20:23.510 { 00:20:23.510 "name": null, 00:20:23.510 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:23.510 "is_configured": false, 00:20:23.511 "data_offset": 0, 00:20:23.511 "data_size": 63488 00:20:23.511 }, 00:20:23.511 { 00:20:23.511 "name": null, 00:20:23.511 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:23.511 "is_configured": false, 00:20:23.511 "data_offset": 2048, 00:20:23.511 "data_size": 63488 00:20:23.511 } 00:20:23.511 ] 00:20:23.511 }' 00:20:23.511 12:19:19 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:23.511 12:19:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.078 12:19:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:24.078 12:19:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:24.078 12:19:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:24.078 12:19:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.078 12:19:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.078 [2024-11-25 12:19:19.987605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:24.078 [2024-11-25 12:19:19.987690] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:24.078 [2024-11-25 12:19:19.987717] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:20:24.078 [2024-11-25 12:19:19.987734] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:24.078 [2024-11-25 12:19:19.988309] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:24.078 [2024-11-25 12:19:19.988363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:24.078 [2024-11-25 12:19:19.988467] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:24.078 [2024-11-25 12:19:19.988507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:24.078 pt2 00:20:24.078 12:19:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.078 12:19:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:24.078 12:19:19 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:24.078 12:19:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:24.078 12:19:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.078 12:19:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.078 [2024-11-25 12:19:19.999578] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:24.078 [2024-11-25 12:19:19.999635] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:24.078 [2024-11-25 12:19:19.999670] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:24.078 [2024-11-25 12:19:19.999698] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:24.078 [2024-11-25 12:19:20.000137] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:24.078 [2024-11-25 12:19:20.000186] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:24.078 [2024-11-25 12:19:20.000265] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:24.078 [2024-11-25 12:19:20.000305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:24.078 [2024-11-25 12:19:20.000476] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:24.078 [2024-11-25 12:19:20.000500] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:24.078 [2024-11-25 12:19:20.000804] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:24.078 [2024-11-25 12:19:20.005749] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:24.078 [2024-11-25 12:19:20.005781] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:24.078 [2024-11-25 12:19:20.006005] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:24.078 pt3 00:20:24.078 12:19:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.078 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:24.078 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:24.078 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:24.078 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:24.078 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:24.078 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:24.078 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:24.078 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:24.078 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:24.078 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:24.078 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:24.078 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:24.078 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.078 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.078 12:19:20 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.078 12:19:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.079 12:19:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.079 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:24.079 "name": "raid_bdev1", 00:20:24.079 "uuid": "d7975dc9-5d1f-4610-b2e2-4fcf8b315800", 00:20:24.079 "strip_size_kb": 64, 00:20:24.079 "state": "online", 00:20:24.079 "raid_level": "raid5f", 00:20:24.079 "superblock": true, 00:20:24.079 "num_base_bdevs": 3, 00:20:24.079 "num_base_bdevs_discovered": 3, 00:20:24.079 "num_base_bdevs_operational": 3, 00:20:24.079 "base_bdevs_list": [ 00:20:24.079 { 00:20:24.079 "name": "pt1", 00:20:24.079 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:24.079 "is_configured": true, 00:20:24.079 "data_offset": 2048, 00:20:24.079 "data_size": 63488 00:20:24.079 }, 00:20:24.079 { 00:20:24.079 "name": "pt2", 00:20:24.079 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:24.079 "is_configured": true, 00:20:24.079 "data_offset": 2048, 00:20:24.079 "data_size": 63488 00:20:24.079 }, 00:20:24.079 { 00:20:24.079 "name": "pt3", 00:20:24.079 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:24.079 "is_configured": true, 00:20:24.079 "data_offset": 2048, 00:20:24.079 "data_size": 63488 00:20:24.079 } 00:20:24.079 ] 00:20:24.079 }' 00:20:24.079 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:24.079 12:19:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.646 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:20:24.646 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:24.646 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:20:24.646 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:24.646 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:24.646 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:24.646 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:24.646 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:24.646 12:19:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.646 12:19:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.646 [2024-11-25 12:19:20.520018] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:24.646 12:19:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.646 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:24.646 "name": "raid_bdev1", 00:20:24.646 "aliases": [ 00:20:24.646 "d7975dc9-5d1f-4610-b2e2-4fcf8b315800" 00:20:24.646 ], 00:20:24.646 "product_name": "Raid Volume", 00:20:24.646 "block_size": 512, 00:20:24.646 "num_blocks": 126976, 00:20:24.646 "uuid": "d7975dc9-5d1f-4610-b2e2-4fcf8b315800", 00:20:24.646 "assigned_rate_limits": { 00:20:24.646 "rw_ios_per_sec": 0, 00:20:24.646 "rw_mbytes_per_sec": 0, 00:20:24.646 "r_mbytes_per_sec": 0, 00:20:24.646 "w_mbytes_per_sec": 0 00:20:24.646 }, 00:20:24.646 "claimed": false, 00:20:24.646 "zoned": false, 00:20:24.646 "supported_io_types": { 00:20:24.646 "read": true, 00:20:24.646 "write": true, 00:20:24.646 "unmap": false, 00:20:24.646 "flush": false, 00:20:24.646 "reset": true, 00:20:24.646 "nvme_admin": false, 00:20:24.646 "nvme_io": false, 00:20:24.646 "nvme_io_md": false, 00:20:24.646 "write_zeroes": true, 00:20:24.646 "zcopy": false, 00:20:24.646 
"get_zone_info": false, 00:20:24.646 "zone_management": false, 00:20:24.646 "zone_append": false, 00:20:24.646 "compare": false, 00:20:24.646 "compare_and_write": false, 00:20:24.646 "abort": false, 00:20:24.646 "seek_hole": false, 00:20:24.646 "seek_data": false, 00:20:24.646 "copy": false, 00:20:24.646 "nvme_iov_md": false 00:20:24.646 }, 00:20:24.646 "driver_specific": { 00:20:24.646 "raid": { 00:20:24.646 "uuid": "d7975dc9-5d1f-4610-b2e2-4fcf8b315800", 00:20:24.646 "strip_size_kb": 64, 00:20:24.646 "state": "online", 00:20:24.646 "raid_level": "raid5f", 00:20:24.646 "superblock": true, 00:20:24.646 "num_base_bdevs": 3, 00:20:24.646 "num_base_bdevs_discovered": 3, 00:20:24.646 "num_base_bdevs_operational": 3, 00:20:24.646 "base_bdevs_list": [ 00:20:24.646 { 00:20:24.647 "name": "pt1", 00:20:24.647 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:24.647 "is_configured": true, 00:20:24.647 "data_offset": 2048, 00:20:24.647 "data_size": 63488 00:20:24.647 }, 00:20:24.647 { 00:20:24.647 "name": "pt2", 00:20:24.647 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:24.647 "is_configured": true, 00:20:24.647 "data_offset": 2048, 00:20:24.647 "data_size": 63488 00:20:24.647 }, 00:20:24.647 { 00:20:24.647 "name": "pt3", 00:20:24.647 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:24.647 "is_configured": true, 00:20:24.647 "data_offset": 2048, 00:20:24.647 "data_size": 63488 00:20:24.647 } 00:20:24.647 ] 00:20:24.647 } 00:20:24.647 } 00:20:24.647 }' 00:20:24.647 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:24.647 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:24.647 pt2 00:20:24.647 pt3' 00:20:24.647 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:24.647 12:19:20 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:24.647 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:24.647 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:24.647 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:24.647 12:19:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.647 12:19:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.647 12:19:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.647 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:24.647 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:24.647 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:24.647 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:24.647 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:24.647 12:19:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.647 12:19:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.905 12:19:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.905 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:24.905 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:24.905 12:19:20 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:24.905 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:24.905 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:20:24.905 12:19:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.905 12:19:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.905 12:19:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.906 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:24.906 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:24.906 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:24.906 12:19:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.906 12:19:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.906 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:24.906 [2024-11-25 12:19:20.824076] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:24.906 12:19:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.906 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d7975dc9-5d1f-4610-b2e2-4fcf8b315800 '!=' d7975dc9-5d1f-4610-b2e2-4fcf8b315800 ']' 00:20:24.906 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:20:24.906 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:24.906 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:20:24.906 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:20:24.906 12:19:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.906 12:19:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.906 [2024-11-25 12:19:20.884100] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:24.906 12:19:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.906 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:24.906 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:24.906 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:24.906 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:24.906 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:24.906 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:24.906 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:24.906 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:24.906 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:24.906 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:24.906 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.906 12:19:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.906 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:20:24.906 12:19:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.906 12:19:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.906 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:24.906 "name": "raid_bdev1", 00:20:24.906 "uuid": "d7975dc9-5d1f-4610-b2e2-4fcf8b315800", 00:20:24.906 "strip_size_kb": 64, 00:20:24.906 "state": "online", 00:20:24.906 "raid_level": "raid5f", 00:20:24.906 "superblock": true, 00:20:24.906 "num_base_bdevs": 3, 00:20:24.906 "num_base_bdevs_discovered": 2, 00:20:24.906 "num_base_bdevs_operational": 2, 00:20:24.906 "base_bdevs_list": [ 00:20:24.906 { 00:20:24.906 "name": null, 00:20:24.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.906 "is_configured": false, 00:20:24.906 "data_offset": 0, 00:20:24.906 "data_size": 63488 00:20:24.906 }, 00:20:24.906 { 00:20:24.906 "name": "pt2", 00:20:24.906 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:24.906 "is_configured": true, 00:20:24.906 "data_offset": 2048, 00:20:24.906 "data_size": 63488 00:20:24.906 }, 00:20:24.906 { 00:20:24.906 "name": "pt3", 00:20:24.906 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:24.906 "is_configured": true, 00:20:24.906 "data_offset": 2048, 00:20:24.906 "data_size": 63488 00:20:24.906 } 00:20:24.906 ] 00:20:24.906 }' 00:20:24.906 12:19:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:24.906 12:19:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.473 12:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:25.473 12:19:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.473 12:19:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.473 [2024-11-25 12:19:21.456171] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:25.473 [2024-11-25 12:19:21.456213] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:25.473 [2024-11-25 12:19:21.456333] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:25.473 [2024-11-25 12:19:21.456434] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:25.473 [2024-11-25 12:19:21.456459] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:25.473 12:19:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.473 12:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:20:25.473 12:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.473 12:19:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.473 12:19:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.473 12:19:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.473 12:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:25.473 12:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:25.473 12:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:25.473 12:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:25.473 12:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:25.473 12:19:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.473 12:19:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:20:25.473 12:19:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.473 12:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:25.473 12:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:25.473 12:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:20:25.473 12:19:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.473 12:19:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.473 12:19:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.473 12:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:25.473 12:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:25.473 12:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:25.473 12:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:25.473 12:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:25.473 12:19:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.473 12:19:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.473 [2024-11-25 12:19:21.544152] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:25.473 [2024-11-25 12:19:21.544229] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:25.473 [2024-11-25 12:19:21.544254] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:20:25.473 [2024-11-25 12:19:21.544271] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:20:25.473 [2024-11-25 12:19:21.547179] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:25.473 [2024-11-25 12:19:21.547230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:25.473 [2024-11-25 12:19:21.547328] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:25.473 [2024-11-25 12:19:21.547413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:25.473 pt2 00:20:25.473 12:19:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.473 12:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:20:25.473 12:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:25.473 12:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:25.473 12:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:25.474 12:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:25.474 12:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:25.474 12:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:25.474 12:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:25.474 12:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:25.474 12:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:25.474 12:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.474 12:19:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:25.474 12:19:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.474 12:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.737 12:19:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.737 12:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:25.737 "name": "raid_bdev1", 00:20:25.737 "uuid": "d7975dc9-5d1f-4610-b2e2-4fcf8b315800", 00:20:25.737 "strip_size_kb": 64, 00:20:25.737 "state": "configuring", 00:20:25.737 "raid_level": "raid5f", 00:20:25.737 "superblock": true, 00:20:25.737 "num_base_bdevs": 3, 00:20:25.737 "num_base_bdevs_discovered": 1, 00:20:25.737 "num_base_bdevs_operational": 2, 00:20:25.737 "base_bdevs_list": [ 00:20:25.737 { 00:20:25.737 "name": null, 00:20:25.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.737 "is_configured": false, 00:20:25.737 "data_offset": 2048, 00:20:25.737 "data_size": 63488 00:20:25.737 }, 00:20:25.737 { 00:20:25.737 "name": "pt2", 00:20:25.737 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:25.737 "is_configured": true, 00:20:25.737 "data_offset": 2048, 00:20:25.737 "data_size": 63488 00:20:25.737 }, 00:20:25.737 { 00:20:25.737 "name": null, 00:20:25.737 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:25.737 "is_configured": false, 00:20:25.737 "data_offset": 2048, 00:20:25.737 "data_size": 63488 00:20:25.737 } 00:20:25.737 ] 00:20:25.737 }' 00:20:25.737 12:19:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:25.737 12:19:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.011 12:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:20:26.011 12:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:26.011 12:19:22 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:20:26.011 12:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:26.011 12:19:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.011 12:19:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.011 [2024-11-25 12:19:22.088315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:26.011 [2024-11-25 12:19:22.088411] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:26.011 [2024-11-25 12:19:22.088445] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:20:26.011 [2024-11-25 12:19:22.088463] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:26.011 [2024-11-25 12:19:22.089050] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:26.011 [2024-11-25 12:19:22.089098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:26.011 [2024-11-25 12:19:22.089206] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:26.011 [2024-11-25 12:19:22.089269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:26.011 [2024-11-25 12:19:22.089445] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:26.011 [2024-11-25 12:19:22.089469] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:26.011 [2024-11-25 12:19:22.089780] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:26.011 [2024-11-25 12:19:22.094929] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:26.011 [2024-11-25 12:19:22.094961] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:20:26.011 [2024-11-25 12:19:22.095321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:26.011 pt3 00:20:26.011 12:19:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.011 12:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:26.011 12:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:26.011 12:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:26.011 12:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:26.011 12:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:26.011 12:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:26.011 12:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:26.011 12:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:26.011 12:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:26.011 12:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:26.269 12:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.269 12:19:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.269 12:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.269 12:19:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.269 12:19:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.269 12:19:22 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:26.269 "name": "raid_bdev1", 00:20:26.269 "uuid": "d7975dc9-5d1f-4610-b2e2-4fcf8b315800", 00:20:26.269 "strip_size_kb": 64, 00:20:26.269 "state": "online", 00:20:26.269 "raid_level": "raid5f", 00:20:26.269 "superblock": true, 00:20:26.269 "num_base_bdevs": 3, 00:20:26.269 "num_base_bdevs_discovered": 2, 00:20:26.269 "num_base_bdevs_operational": 2, 00:20:26.269 "base_bdevs_list": [ 00:20:26.269 { 00:20:26.269 "name": null, 00:20:26.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.269 "is_configured": false, 00:20:26.269 "data_offset": 2048, 00:20:26.269 "data_size": 63488 00:20:26.269 }, 00:20:26.269 { 00:20:26.269 "name": "pt2", 00:20:26.269 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:26.269 "is_configured": true, 00:20:26.269 "data_offset": 2048, 00:20:26.269 "data_size": 63488 00:20:26.269 }, 00:20:26.269 { 00:20:26.269 "name": "pt3", 00:20:26.269 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:26.269 "is_configured": true, 00:20:26.269 "data_offset": 2048, 00:20:26.269 "data_size": 63488 00:20:26.269 } 00:20:26.269 ] 00:20:26.270 }' 00:20:26.270 12:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:26.270 12:19:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.836 12:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:26.836 12:19:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.836 12:19:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.836 [2024-11-25 12:19:22.629074] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:26.836 [2024-11-25 12:19:22.629120] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:26.836 [2024-11-25 12:19:22.629221] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:26.836 [2024-11-25 12:19:22.629310] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:26.836 [2024-11-25 12:19:22.629326] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:26.836 12:19:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.836 12:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.836 12:19:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.836 12:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:26.836 12:19:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.836 12:19:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.836 12:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:26.836 12:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:20:26.836 12:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:20:26.836 12:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:20:26.836 12:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:20:26.836 12:19:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.836 12:19:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.836 12:19:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.836 12:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:20:26.836 12:19:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.836 12:19:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.836 [2024-11-25 12:19:22.701097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:26.836 [2024-11-25 12:19:22.701166] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:26.836 [2024-11-25 12:19:22.701200] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:26.836 [2024-11-25 12:19:22.701215] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:26.836 [2024-11-25 12:19:22.704225] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:26.836 [2024-11-25 12:19:22.704269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:26.836 [2024-11-25 12:19:22.704400] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:26.836 [2024-11-25 12:19:22.704462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:26.836 [2024-11-25 12:19:22.704629] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:26.836 [2024-11-25 12:19:22.704647] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:26.836 [2024-11-25 12:19:22.704671] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:26.836 [2024-11-25 12:19:22.704742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:26.836 pt1 00:20:26.836 12:19:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.836 12:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:20:26.836 12:19:22 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:20:26.836 12:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:26.836 12:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:26.836 12:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:26.836 12:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:26.836 12:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:26.836 12:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:26.836 12:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:26.836 12:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:26.836 12:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:26.836 12:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.836 12:19:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.836 12:19:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.836 12:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.836 12:19:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.836 12:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:26.836 "name": "raid_bdev1", 00:20:26.836 "uuid": "d7975dc9-5d1f-4610-b2e2-4fcf8b315800", 00:20:26.836 "strip_size_kb": 64, 00:20:26.836 "state": "configuring", 00:20:26.836 "raid_level": "raid5f", 00:20:26.836 
"superblock": true, 00:20:26.836 "num_base_bdevs": 3, 00:20:26.836 "num_base_bdevs_discovered": 1, 00:20:26.836 "num_base_bdevs_operational": 2, 00:20:26.836 "base_bdevs_list": [ 00:20:26.836 { 00:20:26.836 "name": null, 00:20:26.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.836 "is_configured": false, 00:20:26.836 "data_offset": 2048, 00:20:26.836 "data_size": 63488 00:20:26.836 }, 00:20:26.836 { 00:20:26.836 "name": "pt2", 00:20:26.836 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:26.836 "is_configured": true, 00:20:26.836 "data_offset": 2048, 00:20:26.836 "data_size": 63488 00:20:26.836 }, 00:20:26.836 { 00:20:26.836 "name": null, 00:20:26.836 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:26.836 "is_configured": false, 00:20:26.836 "data_offset": 2048, 00:20:26.836 "data_size": 63488 00:20:26.836 } 00:20:26.836 ] 00:20:26.836 }' 00:20:26.836 12:19:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:26.836 12:19:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.404 12:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:20:27.404 12:19:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.404 12:19:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.404 12:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:27.404 12:19:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.404 12:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:20:27.404 12:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:27.404 12:19:23 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.404 12:19:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.404 [2024-11-25 12:19:23.289292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:27.404 [2024-11-25 12:19:23.289387] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:27.404 [2024-11-25 12:19:23.289421] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:20:27.404 [2024-11-25 12:19:23.289436] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:27.404 [2024-11-25 12:19:23.290032] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:27.404 [2024-11-25 12:19:23.290066] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:27.404 [2024-11-25 12:19:23.290183] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:27.404 [2024-11-25 12:19:23.290216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:27.404 [2024-11-25 12:19:23.290460] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:20:27.404 [2024-11-25 12:19:23.290482] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:27.404 [2024-11-25 12:19:23.290814] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:27.404 [2024-11-25 12:19:23.295924] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:27.404 [2024-11-25 12:19:23.295980] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:27.404 [2024-11-25 12:19:23.296287] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:27.404 pt3 00:20:27.404 12:19:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:20:27.405 12:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:27.405 12:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:27.405 12:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:27.405 12:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:27.405 12:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:27.405 12:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:27.405 12:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:27.405 12:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:27.405 12:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:27.405 12:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:27.405 12:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.405 12:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.405 12:19:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.405 12:19:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.405 12:19:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.405 12:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:27.405 "name": "raid_bdev1", 00:20:27.405 "uuid": "d7975dc9-5d1f-4610-b2e2-4fcf8b315800", 00:20:27.405 "strip_size_kb": 64, 00:20:27.405 "state": "online", 00:20:27.405 "raid_level": 
"raid5f", 00:20:27.405 "superblock": true, 00:20:27.405 "num_base_bdevs": 3, 00:20:27.405 "num_base_bdevs_discovered": 2, 00:20:27.405 "num_base_bdevs_operational": 2, 00:20:27.405 "base_bdevs_list": [ 00:20:27.405 { 00:20:27.405 "name": null, 00:20:27.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.405 "is_configured": false, 00:20:27.405 "data_offset": 2048, 00:20:27.405 "data_size": 63488 00:20:27.405 }, 00:20:27.405 { 00:20:27.405 "name": "pt2", 00:20:27.405 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:27.405 "is_configured": true, 00:20:27.405 "data_offset": 2048, 00:20:27.405 "data_size": 63488 00:20:27.405 }, 00:20:27.405 { 00:20:27.405 "name": "pt3", 00:20:27.405 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:27.405 "is_configured": true, 00:20:27.405 "data_offset": 2048, 00:20:27.405 "data_size": 63488 00:20:27.405 } 00:20:27.405 ] 00:20:27.405 }' 00:20:27.405 12:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:27.405 12:19:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.971 12:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:27.971 12:19:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.971 12:19:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.971 12:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:27.971 12:19:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.971 12:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:27.971 12:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:27.971 12:19:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:27.971 12:19:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.971 12:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:27.971 [2024-11-25 12:19:23.878316] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:27.971 12:19:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.971 12:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' d7975dc9-5d1f-4610-b2e2-4fcf8b315800 '!=' d7975dc9-5d1f-4610-b2e2-4fcf8b315800 ']' 00:20:27.971 12:19:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81486 00:20:27.971 12:19:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81486 ']' 00:20:27.971 12:19:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81486 00:20:27.971 12:19:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:20:27.971 12:19:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:27.971 12:19:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81486 00:20:27.971 12:19:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:27.971 12:19:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:27.971 killing process with pid 81486 00:20:27.971 12:19:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81486' 00:20:27.971 12:19:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81486 00:20:27.971 [2024-11-25 12:19:23.961124] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:27.971 12:19:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 
81486 00:20:27.971 [2024-11-25 12:19:23.961258] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:27.971 [2024-11-25 12:19:23.961357] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:27.971 [2024-11-25 12:19:23.961380] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:28.230 [2024-11-25 12:19:24.247259] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:29.682 12:19:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:20:29.682 00:20:29.682 real 0m8.831s 00:20:29.682 user 0m14.358s 00:20:29.682 sys 0m1.264s 00:20:29.682 12:19:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:29.682 12:19:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.682 ************************************ 00:20:29.682 END TEST raid5f_superblock_test 00:20:29.682 ************************************ 00:20:29.682 12:19:25 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:20:29.682 12:19:25 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:20:29.682 12:19:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:29.682 12:19:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:29.682 12:19:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:29.682 ************************************ 00:20:29.682 START TEST raid5f_rebuild_test 00:20:29.682 ************************************ 00:20:29.682 12:19:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:20:29.682 12:19:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:20:29.682 12:19:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # 
local num_base_bdevs=3 00:20:29.682 12:19:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:20:29.682 12:19:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:29.682 12:19:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:29.682 12:19:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:29.682 12:19:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:29.682 12:19:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:29.682 12:19:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:29.682 12:19:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:29.682 12:19:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:29.682 12:19:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:29.682 12:19:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:29.682 12:19:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:20:29.682 12:19:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:29.682 12:19:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:29.682 12:19:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:29.682 12:19:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:29.682 12:19:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:29.682 12:19:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:29.682 12:19:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:29.682 12:19:25 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:29.682 12:19:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:29.682 12:19:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:20:29.682 12:19:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:20:29.682 12:19:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:20:29.683 12:19:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:20:29.683 12:19:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:20:29.683 12:19:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81939 00:20:29.683 12:19:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81939 00:20:29.683 12:19:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81939 ']' 00:20:29.683 12:19:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.683 12:19:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:29.683 12:19:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:29.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.683 12:19:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:29.683 12:19:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:29.683 12:19:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.683 [2024-11-25 12:19:25.588264] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:20:29.683 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:29.683 Zero copy mechanism will not be used. 00:20:29.683 [2024-11-25 12:19:25.588585] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81939 ] 00:20:29.941 [2024-11-25 12:19:25.787903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.941 [2024-11-25 12:19:25.961102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.200 [2024-11-25 12:19:26.207802] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:30.200 [2024-11-25 12:19:26.207900] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:30.767 12:19:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:30.767 12:19:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:20:30.767 12:19:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:30.767 12:19:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:30.767 12:19:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.767 12:19:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.767 BaseBdev1_malloc 00:20:30.767 12:19:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.767 
12:19:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:30.767 12:19:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.767 12:19:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.767 [2024-11-25 12:19:26.686886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:30.767 [2024-11-25 12:19:26.686972] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:30.767 [2024-11-25 12:19:26.687005] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:30.767 [2024-11-25 12:19:26.687024] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:30.767 [2024-11-25 12:19:26.690095] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:30.767 [2024-11-25 12:19:26.690142] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:30.767 BaseBdev1 00:20:30.767 12:19:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.767 12:19:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:30.767 12:19:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:30.767 12:19:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.767 12:19:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.767 BaseBdev2_malloc 00:20:30.767 12:19:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.767 12:19:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:30.767 12:19:26 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.767 12:19:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.767 [2024-11-25 12:19:26.747713] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:30.767 [2024-11-25 12:19:26.747789] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:30.767 [2024-11-25 12:19:26.747819] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:30.767 [2024-11-25 12:19:26.747839] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:30.767 [2024-11-25 12:19:26.750832] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:30.767 [2024-11-25 12:19:26.750877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:30.767 BaseBdev2 00:20:30.767 12:19:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.767 12:19:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:30.767 12:19:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:30.767 12:19:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.767 12:19:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.767 BaseBdev3_malloc 00:20:30.767 12:19:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.767 12:19:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:30.767 12:19:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.767 12:19:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.767 [2024-11-25 12:19:26.825523] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:30.767 [2024-11-25 12:19:26.825603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:30.767 [2024-11-25 12:19:26.825642] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:30.767 [2024-11-25 12:19:26.825661] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:30.767 [2024-11-25 12:19:26.828711] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:30.767 [2024-11-25 12:19:26.828763] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:30.767 BaseBdev3 00:20:30.767 12:19:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.767 12:19:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:30.767 12:19:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.767 12:19:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.026 spare_malloc 00:20:31.026 12:19:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.026 12:19:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:31.026 12:19:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.026 12:19:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.026 spare_delay 00:20:31.026 12:19:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.026 12:19:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:31.026 12:19:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:31.026 12:19:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.026 [2024-11-25 12:19:26.898482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:31.026 [2024-11-25 12:19:26.898558] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:31.026 [2024-11-25 12:19:26.898591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:20:31.026 [2024-11-25 12:19:26.898609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:31.026 [2024-11-25 12:19:26.901683] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:31.026 [2024-11-25 12:19:26.901735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:31.026 spare 00:20:31.026 12:19:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.026 12:19:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:20:31.026 12:19:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.026 12:19:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.026 [2024-11-25 12:19:26.910641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:31.026 [2024-11-25 12:19:26.913275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:31.026 [2024-11-25 12:19:26.913420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:31.026 [2024-11-25 12:19:26.913556] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:31.026 [2024-11-25 12:19:26.913586] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:20:31.026 [2024-11-25 
12:19:26.913955] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:31.026 [2024-11-25 12:19:26.919282] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:31.026 [2024-11-25 12:19:26.919323] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:31.026 [2024-11-25 12:19:26.919593] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:31.026 12:19:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.026 12:19:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:31.026 12:19:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:31.026 12:19:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:31.026 12:19:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:31.026 12:19:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:31.026 12:19:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:31.026 12:19:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:31.026 12:19:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:31.026 12:19:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:31.026 12:19:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:31.026 12:19:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.026 12:19:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.026 12:19:26 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:31.026 12:19:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:31.026 12:19:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.026 12:19:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:31.026 "name": "raid_bdev1", 00:20:31.026 "uuid": "21bae887-bec3-4ada-a2f6-4e6963f8be43", 00:20:31.026 "strip_size_kb": 64, 00:20:31.026 "state": "online", 00:20:31.026 "raid_level": "raid5f", 00:20:31.026 "superblock": false, 00:20:31.026 "num_base_bdevs": 3, 00:20:31.026 "num_base_bdevs_discovered": 3, 00:20:31.026 "num_base_bdevs_operational": 3, 00:20:31.026 "base_bdevs_list": [ 00:20:31.026 { 00:20:31.026 "name": "BaseBdev1", 00:20:31.026 "uuid": "37a84134-0c22-505c-a7c4-117e6a600314", 00:20:31.026 "is_configured": true, 00:20:31.026 "data_offset": 0, 00:20:31.026 "data_size": 65536 00:20:31.026 }, 00:20:31.026 { 00:20:31.026 "name": "BaseBdev2", 00:20:31.026 "uuid": "f14f531c-3fd1-50de-8fef-c9f916efdc58", 00:20:31.026 "is_configured": true, 00:20:31.026 "data_offset": 0, 00:20:31.026 "data_size": 65536 00:20:31.026 }, 00:20:31.026 { 00:20:31.026 "name": "BaseBdev3", 00:20:31.026 "uuid": "ef11239f-795a-5455-862b-ab430555c9e8", 00:20:31.026 "is_configured": true, 00:20:31.026 "data_offset": 0, 00:20:31.026 "data_size": 65536 00:20:31.026 } 00:20:31.026 ] 00:20:31.026 }' 00:20:31.026 12:19:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:31.026 12:19:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.594 12:19:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:31.594 12:19:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:31.594 12:19:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:31.594 12:19:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.594 [2024-11-25 12:19:27.422336] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:31.594 12:19:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.594 12:19:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:20:31.594 12:19:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:31.594 12:19:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.594 12:19:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.594 12:19:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.594 12:19:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.594 12:19:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:20:31.594 12:19:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:31.594 12:19:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:31.594 12:19:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:31.594 12:19:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:31.594 12:19:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:31.594 12:19:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:31.594 12:19:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:31.594 12:19:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:31.594 12:19:27 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:20:31.594 12:19:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:20:31.594 12:19:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:31.594 12:19:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:31.594 12:19:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:31.853 [2024-11-25 12:19:27.822146] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:31.853 /dev/nbd0 00:20:31.853 12:19:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:31.853 12:19:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:31.853 12:19:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:31.853 12:19:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:20:31.853 12:19:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:31.853 12:19:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:31.853 12:19:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:31.853 12:19:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:20:31.853 12:19:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:31.853 12:19:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:31.853 12:19:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:31.853 1+0 records in 00:20:31.853 1+0 records out 00:20:31.853 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379373 s, 
10.8 MB/s 00:20:31.853 12:19:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:31.853 12:19:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:20:31.853 12:19:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:31.853 12:19:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:31.853 12:19:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:20:31.853 12:19:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:31.853 12:19:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:31.853 12:19:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:20:31.853 12:19:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:20:31.853 12:19:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:20:31.853 12:19:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:20:32.419 512+0 records in 00:20:32.419 512+0 records out 00:20:32.419 67108864 bytes (67 MB, 64 MiB) copied, 0.523146 s, 128 MB/s 00:20:32.419 12:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:32.419 12:19:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:32.419 12:19:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:32.419 12:19:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:32.419 12:19:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:20:32.419 12:19:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i 
in "${nbd_list[@]}" 00:20:32.419 12:19:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:32.677 [2024-11-25 12:19:28.670235] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:32.677 12:19:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:32.677 12:19:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:32.677 12:19:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:32.677 12:19:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:32.677 12:19:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:32.677 12:19:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:32.677 12:19:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:32.677 12:19:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:32.677 12:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:32.677 12:19:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.677 12:19:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.677 [2024-11-25 12:19:28.704087] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:32.677 12:19:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.677 12:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:32.677 12:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:32.677 12:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:20:32.677 12:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:32.677 12:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:32.677 12:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:32.677 12:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:32.677 12:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:32.677 12:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:32.677 12:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:32.677 12:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.677 12:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.677 12:19:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.677 12:19:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.677 12:19:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.677 12:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:32.677 "name": "raid_bdev1", 00:20:32.677 "uuid": "21bae887-bec3-4ada-a2f6-4e6963f8be43", 00:20:32.677 "strip_size_kb": 64, 00:20:32.677 "state": "online", 00:20:32.677 "raid_level": "raid5f", 00:20:32.677 "superblock": false, 00:20:32.677 "num_base_bdevs": 3, 00:20:32.677 "num_base_bdevs_discovered": 2, 00:20:32.677 "num_base_bdevs_operational": 2, 00:20:32.677 "base_bdevs_list": [ 00:20:32.677 { 00:20:32.677 "name": null, 00:20:32.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.677 "is_configured": false, 00:20:32.677 "data_offset": 0, 00:20:32.677 "data_size": 65536 00:20:32.677 }, 
00:20:32.677 { 00:20:32.677 "name": "BaseBdev2", 00:20:32.677 "uuid": "f14f531c-3fd1-50de-8fef-c9f916efdc58", 00:20:32.677 "is_configured": true, 00:20:32.677 "data_offset": 0, 00:20:32.677 "data_size": 65536 00:20:32.677 }, 00:20:32.677 { 00:20:32.677 "name": "BaseBdev3", 00:20:32.677 "uuid": "ef11239f-795a-5455-862b-ab430555c9e8", 00:20:32.677 "is_configured": true, 00:20:32.677 "data_offset": 0, 00:20:32.677 "data_size": 65536 00:20:32.677 } 00:20:32.677 ] 00:20:32.677 }' 00:20:32.677 12:19:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:32.677 12:19:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.242 12:19:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:33.242 12:19:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.242 12:19:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.242 [2024-11-25 12:19:29.228211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:33.242 [2024-11-25 12:19:29.243726] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:20:33.242 12:19:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.242 12:19:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:33.242 [2024-11-25 12:19:29.251131] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:34.176 12:19:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:34.176 12:19:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:34.176 12:19:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:34.176 12:19:30 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:20:34.176 12:19:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:34.176 12:19:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.176 12:19:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.176 12:19:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.176 12:19:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.434 12:19:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.434 12:19:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:34.434 "name": "raid_bdev1", 00:20:34.434 "uuid": "21bae887-bec3-4ada-a2f6-4e6963f8be43", 00:20:34.434 "strip_size_kb": 64, 00:20:34.434 "state": "online", 00:20:34.434 "raid_level": "raid5f", 00:20:34.434 "superblock": false, 00:20:34.434 "num_base_bdevs": 3, 00:20:34.434 "num_base_bdevs_discovered": 3, 00:20:34.434 "num_base_bdevs_operational": 3, 00:20:34.434 "process": { 00:20:34.434 "type": "rebuild", 00:20:34.434 "target": "spare", 00:20:34.434 "progress": { 00:20:34.434 "blocks": 18432, 00:20:34.434 "percent": 14 00:20:34.434 } 00:20:34.434 }, 00:20:34.434 "base_bdevs_list": [ 00:20:34.434 { 00:20:34.434 "name": "spare", 00:20:34.434 "uuid": "1336e2fc-747e-55e0-adfd-8536de24b751", 00:20:34.434 "is_configured": true, 00:20:34.434 "data_offset": 0, 00:20:34.434 "data_size": 65536 00:20:34.434 }, 00:20:34.434 { 00:20:34.434 "name": "BaseBdev2", 00:20:34.434 "uuid": "f14f531c-3fd1-50de-8fef-c9f916efdc58", 00:20:34.434 "is_configured": true, 00:20:34.434 "data_offset": 0, 00:20:34.434 "data_size": 65536 00:20:34.434 }, 00:20:34.434 { 00:20:34.434 "name": "BaseBdev3", 00:20:34.434 "uuid": "ef11239f-795a-5455-862b-ab430555c9e8", 00:20:34.434 "is_configured": true, 00:20:34.434 
"data_offset": 0, 00:20:34.434 "data_size": 65536 00:20:34.434 } 00:20:34.434 ] 00:20:34.434 }' 00:20:34.434 12:19:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:34.434 12:19:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:34.434 12:19:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:34.434 12:19:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:34.434 12:19:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:34.434 12:19:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.434 12:19:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.434 [2024-11-25 12:19:30.420957] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:34.435 [2024-11-25 12:19:30.466542] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:34.435 [2024-11-25 12:19:30.466639] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:34.435 [2024-11-25 12:19:30.466677] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:34.435 [2024-11-25 12:19:30.466690] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:34.435 12:19:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.435 12:19:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:34.435 12:19:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:34.435 12:19:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:34.435 12:19:30 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:34.435 12:19:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:34.435 12:19:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:34.435 12:19:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:34.435 12:19:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:34.435 12:19:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:34.435 12:19:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:34.435 12:19:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.435 12:19:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.435 12:19:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.435 12:19:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.435 12:19:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.694 12:19:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:34.694 "name": "raid_bdev1", 00:20:34.694 "uuid": "21bae887-bec3-4ada-a2f6-4e6963f8be43", 00:20:34.694 "strip_size_kb": 64, 00:20:34.694 "state": "online", 00:20:34.694 "raid_level": "raid5f", 00:20:34.694 "superblock": false, 00:20:34.694 "num_base_bdevs": 3, 00:20:34.694 "num_base_bdevs_discovered": 2, 00:20:34.694 "num_base_bdevs_operational": 2, 00:20:34.694 "base_bdevs_list": [ 00:20:34.694 { 00:20:34.694 "name": null, 00:20:34.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.694 "is_configured": false, 00:20:34.694 "data_offset": 0, 00:20:34.694 "data_size": 65536 00:20:34.694 }, 00:20:34.694 { 00:20:34.694 
"name": "BaseBdev2", 00:20:34.694 "uuid": "f14f531c-3fd1-50de-8fef-c9f916efdc58", 00:20:34.694 "is_configured": true, 00:20:34.694 "data_offset": 0, 00:20:34.694 "data_size": 65536 00:20:34.694 }, 00:20:34.694 { 00:20:34.694 "name": "BaseBdev3", 00:20:34.694 "uuid": "ef11239f-795a-5455-862b-ab430555c9e8", 00:20:34.694 "is_configured": true, 00:20:34.694 "data_offset": 0, 00:20:34.694 "data_size": 65536 00:20:34.694 } 00:20:34.694 ] 00:20:34.694 }' 00:20:34.694 12:19:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:34.694 12:19:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.952 12:19:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:34.952 12:19:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:34.952 12:19:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:34.952 12:19:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:34.952 12:19:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:34.952 12:19:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.952 12:19:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.952 12:19:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.952 12:19:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.952 12:19:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.210 12:19:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:35.210 "name": "raid_bdev1", 00:20:35.210 "uuid": "21bae887-bec3-4ada-a2f6-4e6963f8be43", 00:20:35.210 "strip_size_kb": 64, 00:20:35.210 "state": 
"online", 00:20:35.210 "raid_level": "raid5f", 00:20:35.210 "superblock": false, 00:20:35.210 "num_base_bdevs": 3, 00:20:35.210 "num_base_bdevs_discovered": 2, 00:20:35.210 "num_base_bdevs_operational": 2, 00:20:35.210 "base_bdevs_list": [ 00:20:35.210 { 00:20:35.210 "name": null, 00:20:35.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.210 "is_configured": false, 00:20:35.210 "data_offset": 0, 00:20:35.210 "data_size": 65536 00:20:35.210 }, 00:20:35.210 { 00:20:35.210 "name": "BaseBdev2", 00:20:35.210 "uuid": "f14f531c-3fd1-50de-8fef-c9f916efdc58", 00:20:35.210 "is_configured": true, 00:20:35.210 "data_offset": 0, 00:20:35.210 "data_size": 65536 00:20:35.210 }, 00:20:35.210 { 00:20:35.210 "name": "BaseBdev3", 00:20:35.210 "uuid": "ef11239f-795a-5455-862b-ab430555c9e8", 00:20:35.210 "is_configured": true, 00:20:35.210 "data_offset": 0, 00:20:35.210 "data_size": 65536 00:20:35.210 } 00:20:35.210 ] 00:20:35.210 }' 00:20:35.210 12:19:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:35.210 12:19:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:35.210 12:19:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:35.210 12:19:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:35.210 12:19:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:35.210 12:19:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.210 12:19:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.210 [2024-11-25 12:19:31.157939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:35.210 [2024-11-25 12:19:31.172472] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:20:35.210 12:19:31 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.210 12:19:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:35.210 [2024-11-25 12:19:31.179792] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:36.146 12:19:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:36.146 12:19:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:36.146 12:19:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:36.146 12:19:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:36.146 12:19:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:36.146 12:19:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.146 12:19:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.146 12:19:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.146 12:19:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.146 12:19:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.146 12:19:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:36.146 "name": "raid_bdev1", 00:20:36.146 "uuid": "21bae887-bec3-4ada-a2f6-4e6963f8be43", 00:20:36.146 "strip_size_kb": 64, 00:20:36.146 "state": "online", 00:20:36.146 "raid_level": "raid5f", 00:20:36.146 "superblock": false, 00:20:36.146 "num_base_bdevs": 3, 00:20:36.146 "num_base_bdevs_discovered": 3, 00:20:36.146 "num_base_bdevs_operational": 3, 00:20:36.146 "process": { 00:20:36.146 "type": "rebuild", 00:20:36.146 "target": "spare", 00:20:36.146 "progress": { 
00:20:36.146 "blocks": 18432, 00:20:36.146 "percent": 14 00:20:36.146 } 00:20:36.146 }, 00:20:36.146 "base_bdevs_list": [ 00:20:36.146 { 00:20:36.146 "name": "spare", 00:20:36.146 "uuid": "1336e2fc-747e-55e0-adfd-8536de24b751", 00:20:36.146 "is_configured": true, 00:20:36.146 "data_offset": 0, 00:20:36.146 "data_size": 65536 00:20:36.146 }, 00:20:36.146 { 00:20:36.146 "name": "BaseBdev2", 00:20:36.146 "uuid": "f14f531c-3fd1-50de-8fef-c9f916efdc58", 00:20:36.146 "is_configured": true, 00:20:36.146 "data_offset": 0, 00:20:36.146 "data_size": 65536 00:20:36.146 }, 00:20:36.146 { 00:20:36.146 "name": "BaseBdev3", 00:20:36.146 "uuid": "ef11239f-795a-5455-862b-ab430555c9e8", 00:20:36.146 "is_configured": true, 00:20:36.146 "data_offset": 0, 00:20:36.146 "data_size": 65536 00:20:36.146 } 00:20:36.146 ] 00:20:36.146 }' 00:20:36.146 12:19:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:36.406 12:19:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:36.406 12:19:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:36.406 12:19:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:36.406 12:19:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:20:36.406 12:19:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:20:36.406 12:19:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:20:36.406 12:19:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=591 00:20:36.406 12:19:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:36.406 12:19:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:36.406 12:19:32 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:36.406 12:19:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:36.406 12:19:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:36.406 12:19:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:36.406 12:19:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.406 12:19:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.406 12:19:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.406 12:19:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.406 12:19:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.406 12:19:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:36.406 "name": "raid_bdev1", 00:20:36.406 "uuid": "21bae887-bec3-4ada-a2f6-4e6963f8be43", 00:20:36.406 "strip_size_kb": 64, 00:20:36.406 "state": "online", 00:20:36.406 "raid_level": "raid5f", 00:20:36.406 "superblock": false, 00:20:36.406 "num_base_bdevs": 3, 00:20:36.406 "num_base_bdevs_discovered": 3, 00:20:36.406 "num_base_bdevs_operational": 3, 00:20:36.406 "process": { 00:20:36.406 "type": "rebuild", 00:20:36.406 "target": "spare", 00:20:36.406 "progress": { 00:20:36.406 "blocks": 22528, 00:20:36.406 "percent": 17 00:20:36.406 } 00:20:36.406 }, 00:20:36.406 "base_bdevs_list": [ 00:20:36.406 { 00:20:36.406 "name": "spare", 00:20:36.406 "uuid": "1336e2fc-747e-55e0-adfd-8536de24b751", 00:20:36.406 "is_configured": true, 00:20:36.406 "data_offset": 0, 00:20:36.406 "data_size": 65536 00:20:36.406 }, 00:20:36.406 { 00:20:36.406 "name": "BaseBdev2", 00:20:36.406 "uuid": "f14f531c-3fd1-50de-8fef-c9f916efdc58", 00:20:36.406 "is_configured": true, 00:20:36.406 
"data_offset": 0, 00:20:36.406 "data_size": 65536 00:20:36.406 }, 00:20:36.406 { 00:20:36.406 "name": "BaseBdev3", 00:20:36.406 "uuid": "ef11239f-795a-5455-862b-ab430555c9e8", 00:20:36.406 "is_configured": true, 00:20:36.406 "data_offset": 0, 00:20:36.406 "data_size": 65536 00:20:36.406 } 00:20:36.406 ] 00:20:36.406 }' 00:20:36.406 12:19:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:36.406 12:19:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:36.406 12:19:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:36.406 12:19:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:36.406 12:19:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:37.788 12:19:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:37.788 12:19:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:37.788 12:19:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:37.788 12:19:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:37.788 12:19:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:37.788 12:19:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:37.788 12:19:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.788 12:19:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.788 12:19:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.788 12:19:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.788 12:19:33 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.788 12:19:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:37.788 "name": "raid_bdev1", 00:20:37.788 "uuid": "21bae887-bec3-4ada-a2f6-4e6963f8be43", 00:20:37.788 "strip_size_kb": 64, 00:20:37.788 "state": "online", 00:20:37.788 "raid_level": "raid5f", 00:20:37.788 "superblock": false, 00:20:37.788 "num_base_bdevs": 3, 00:20:37.788 "num_base_bdevs_discovered": 3, 00:20:37.788 "num_base_bdevs_operational": 3, 00:20:37.788 "process": { 00:20:37.788 "type": "rebuild", 00:20:37.788 "target": "spare", 00:20:37.788 "progress": { 00:20:37.788 "blocks": 45056, 00:20:37.788 "percent": 34 00:20:37.788 } 00:20:37.788 }, 00:20:37.788 "base_bdevs_list": [ 00:20:37.788 { 00:20:37.788 "name": "spare", 00:20:37.788 "uuid": "1336e2fc-747e-55e0-adfd-8536de24b751", 00:20:37.788 "is_configured": true, 00:20:37.788 "data_offset": 0, 00:20:37.788 "data_size": 65536 00:20:37.788 }, 00:20:37.788 { 00:20:37.788 "name": "BaseBdev2", 00:20:37.788 "uuid": "f14f531c-3fd1-50de-8fef-c9f916efdc58", 00:20:37.788 "is_configured": true, 00:20:37.788 "data_offset": 0, 00:20:37.788 "data_size": 65536 00:20:37.788 }, 00:20:37.788 { 00:20:37.788 "name": "BaseBdev3", 00:20:37.788 "uuid": "ef11239f-795a-5455-862b-ab430555c9e8", 00:20:37.788 "is_configured": true, 00:20:37.788 "data_offset": 0, 00:20:37.788 "data_size": 65536 00:20:37.788 } 00:20:37.788 ] 00:20:37.788 }' 00:20:37.788 12:19:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:37.788 12:19:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:37.788 12:19:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:37.788 12:19:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:37.788 12:19:33 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:20:38.722 12:19:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:38.722 12:19:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:38.722 12:19:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:38.722 12:19:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:38.722 12:19:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:38.722 12:19:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:38.722 12:19:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.722 12:19:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.722 12:19:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.722 12:19:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.722 12:19:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.722 12:19:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:38.722 "name": "raid_bdev1", 00:20:38.722 "uuid": "21bae887-bec3-4ada-a2f6-4e6963f8be43", 00:20:38.722 "strip_size_kb": 64, 00:20:38.722 "state": "online", 00:20:38.722 "raid_level": "raid5f", 00:20:38.722 "superblock": false, 00:20:38.722 "num_base_bdevs": 3, 00:20:38.722 "num_base_bdevs_discovered": 3, 00:20:38.722 "num_base_bdevs_operational": 3, 00:20:38.722 "process": { 00:20:38.722 "type": "rebuild", 00:20:38.722 "target": "spare", 00:20:38.722 "progress": { 00:20:38.722 "blocks": 69632, 00:20:38.722 "percent": 53 00:20:38.722 } 00:20:38.722 }, 00:20:38.722 "base_bdevs_list": [ 00:20:38.722 { 00:20:38.722 "name": "spare", 00:20:38.722 
"uuid": "1336e2fc-747e-55e0-adfd-8536de24b751", 00:20:38.722 "is_configured": true, 00:20:38.722 "data_offset": 0, 00:20:38.722 "data_size": 65536 00:20:38.722 }, 00:20:38.722 { 00:20:38.722 "name": "BaseBdev2", 00:20:38.722 "uuid": "f14f531c-3fd1-50de-8fef-c9f916efdc58", 00:20:38.722 "is_configured": true, 00:20:38.722 "data_offset": 0, 00:20:38.722 "data_size": 65536 00:20:38.722 }, 00:20:38.722 { 00:20:38.722 "name": "BaseBdev3", 00:20:38.722 "uuid": "ef11239f-795a-5455-862b-ab430555c9e8", 00:20:38.722 "is_configured": true, 00:20:38.722 "data_offset": 0, 00:20:38.722 "data_size": 65536 00:20:38.722 } 00:20:38.722 ] 00:20:38.722 }' 00:20:38.722 12:19:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:38.722 12:19:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:38.722 12:19:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:38.980 12:19:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:38.980 12:19:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:39.916 12:19:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:39.916 12:19:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:39.916 12:19:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:39.916 12:19:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:39.916 12:19:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:39.916 12:19:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:39.916 12:19:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.916 12:19:35 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.916 12:19:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.916 12:19:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.916 12:19:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.916 12:19:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:39.916 "name": "raid_bdev1", 00:20:39.916 "uuid": "21bae887-bec3-4ada-a2f6-4e6963f8be43", 00:20:39.916 "strip_size_kb": 64, 00:20:39.916 "state": "online", 00:20:39.916 "raid_level": "raid5f", 00:20:39.916 "superblock": false, 00:20:39.916 "num_base_bdevs": 3, 00:20:39.916 "num_base_bdevs_discovered": 3, 00:20:39.916 "num_base_bdevs_operational": 3, 00:20:39.916 "process": { 00:20:39.916 "type": "rebuild", 00:20:39.916 "target": "spare", 00:20:39.916 "progress": { 00:20:39.916 "blocks": 92160, 00:20:39.916 "percent": 70 00:20:39.916 } 00:20:39.916 }, 00:20:39.916 "base_bdevs_list": [ 00:20:39.916 { 00:20:39.916 "name": "spare", 00:20:39.916 "uuid": "1336e2fc-747e-55e0-adfd-8536de24b751", 00:20:39.916 "is_configured": true, 00:20:39.916 "data_offset": 0, 00:20:39.916 "data_size": 65536 00:20:39.916 }, 00:20:39.916 { 00:20:39.916 "name": "BaseBdev2", 00:20:39.916 "uuid": "f14f531c-3fd1-50de-8fef-c9f916efdc58", 00:20:39.916 "is_configured": true, 00:20:39.916 "data_offset": 0, 00:20:39.916 "data_size": 65536 00:20:39.916 }, 00:20:39.916 { 00:20:39.916 "name": "BaseBdev3", 00:20:39.916 "uuid": "ef11239f-795a-5455-862b-ab430555c9e8", 00:20:39.916 "is_configured": true, 00:20:39.916 "data_offset": 0, 00:20:39.916 "data_size": 65536 00:20:39.916 } 00:20:39.916 ] 00:20:39.916 }' 00:20:39.916 12:19:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:39.916 12:19:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:39.916 12:19:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:39.916 12:19:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:39.916 12:19:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:41.294 12:19:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:41.294 12:19:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:41.294 12:19:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:41.294 12:19:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:41.294 12:19:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:41.294 12:19:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:41.294 12:19:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.294 12:19:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.294 12:19:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.294 12:19:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.294 12:19:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.294 12:19:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:41.294 "name": "raid_bdev1", 00:20:41.294 "uuid": "21bae887-bec3-4ada-a2f6-4e6963f8be43", 00:20:41.294 "strip_size_kb": 64, 00:20:41.294 "state": "online", 00:20:41.294 "raid_level": "raid5f", 00:20:41.294 "superblock": false, 00:20:41.294 "num_base_bdevs": 3, 00:20:41.294 "num_base_bdevs_discovered": 3, 00:20:41.294 
"num_base_bdevs_operational": 3, 00:20:41.294 "process": { 00:20:41.294 "type": "rebuild", 00:20:41.294 "target": "spare", 00:20:41.294 "progress": { 00:20:41.294 "blocks": 116736, 00:20:41.294 "percent": 89 00:20:41.294 } 00:20:41.294 }, 00:20:41.294 "base_bdevs_list": [ 00:20:41.294 { 00:20:41.294 "name": "spare", 00:20:41.294 "uuid": "1336e2fc-747e-55e0-adfd-8536de24b751", 00:20:41.294 "is_configured": true, 00:20:41.294 "data_offset": 0, 00:20:41.294 "data_size": 65536 00:20:41.294 }, 00:20:41.294 { 00:20:41.294 "name": "BaseBdev2", 00:20:41.294 "uuid": "f14f531c-3fd1-50de-8fef-c9f916efdc58", 00:20:41.294 "is_configured": true, 00:20:41.294 "data_offset": 0, 00:20:41.294 "data_size": 65536 00:20:41.294 }, 00:20:41.294 { 00:20:41.294 "name": "BaseBdev3", 00:20:41.294 "uuid": "ef11239f-795a-5455-862b-ab430555c9e8", 00:20:41.294 "is_configured": true, 00:20:41.294 "data_offset": 0, 00:20:41.294 "data_size": 65536 00:20:41.294 } 00:20:41.294 ] 00:20:41.294 }' 00:20:41.294 12:19:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:41.294 12:19:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:41.294 12:19:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:41.294 12:19:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:41.294 12:19:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:41.862 [2024-11-25 12:19:37.662067] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:41.862 [2024-11-25 12:19:37.662251] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:41.862 [2024-11-25 12:19:37.662330] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:42.121 12:19:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:20:42.121 12:19:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:42.121 12:19:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:42.121 12:19:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:42.121 12:19:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:42.121 12:19:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:42.121 12:19:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.121 12:19:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.121 12:19:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.121 12:19:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.121 12:19:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.121 12:19:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:42.121 "name": "raid_bdev1", 00:20:42.121 "uuid": "21bae887-bec3-4ada-a2f6-4e6963f8be43", 00:20:42.121 "strip_size_kb": 64, 00:20:42.121 "state": "online", 00:20:42.121 "raid_level": "raid5f", 00:20:42.121 "superblock": false, 00:20:42.121 "num_base_bdevs": 3, 00:20:42.121 "num_base_bdevs_discovered": 3, 00:20:42.121 "num_base_bdevs_operational": 3, 00:20:42.121 "base_bdevs_list": [ 00:20:42.121 { 00:20:42.121 "name": "spare", 00:20:42.121 "uuid": "1336e2fc-747e-55e0-adfd-8536de24b751", 00:20:42.121 "is_configured": true, 00:20:42.121 "data_offset": 0, 00:20:42.121 "data_size": 65536 00:20:42.122 }, 00:20:42.122 { 00:20:42.122 "name": "BaseBdev2", 00:20:42.122 "uuid": "f14f531c-3fd1-50de-8fef-c9f916efdc58", 00:20:42.122 "is_configured": true, 00:20:42.122 
"data_offset": 0, 00:20:42.122 "data_size": 65536 00:20:42.122 }, 00:20:42.122 { 00:20:42.122 "name": "BaseBdev3", 00:20:42.122 "uuid": "ef11239f-795a-5455-862b-ab430555c9e8", 00:20:42.122 "is_configured": true, 00:20:42.122 "data_offset": 0, 00:20:42.122 "data_size": 65536 00:20:42.122 } 00:20:42.122 ] 00:20:42.122 }' 00:20:42.122 12:19:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:42.380 12:19:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:42.380 12:19:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:42.380 12:19:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:42.380 12:19:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:20:42.380 12:19:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:42.380 12:19:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:42.380 12:19:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:42.380 12:19:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:42.380 12:19:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:42.380 12:19:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.380 12:19:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.380 12:19:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.380 12:19:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.380 12:19:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.380 12:19:38 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:42.380 "name": "raid_bdev1", 00:20:42.381 "uuid": "21bae887-bec3-4ada-a2f6-4e6963f8be43", 00:20:42.381 "strip_size_kb": 64, 00:20:42.381 "state": "online", 00:20:42.381 "raid_level": "raid5f", 00:20:42.381 "superblock": false, 00:20:42.381 "num_base_bdevs": 3, 00:20:42.381 "num_base_bdevs_discovered": 3, 00:20:42.381 "num_base_bdevs_operational": 3, 00:20:42.381 "base_bdevs_list": [ 00:20:42.381 { 00:20:42.381 "name": "spare", 00:20:42.381 "uuid": "1336e2fc-747e-55e0-adfd-8536de24b751", 00:20:42.381 "is_configured": true, 00:20:42.381 "data_offset": 0, 00:20:42.381 "data_size": 65536 00:20:42.381 }, 00:20:42.381 { 00:20:42.381 "name": "BaseBdev2", 00:20:42.381 "uuid": "f14f531c-3fd1-50de-8fef-c9f916efdc58", 00:20:42.381 "is_configured": true, 00:20:42.381 "data_offset": 0, 00:20:42.381 "data_size": 65536 00:20:42.381 }, 00:20:42.381 { 00:20:42.381 "name": "BaseBdev3", 00:20:42.381 "uuid": "ef11239f-795a-5455-862b-ab430555c9e8", 00:20:42.381 "is_configured": true, 00:20:42.381 "data_offset": 0, 00:20:42.381 "data_size": 65536 00:20:42.381 } 00:20:42.381 ] 00:20:42.381 }' 00:20:42.381 12:19:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:42.381 12:19:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:42.381 12:19:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:42.381 12:19:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:42.381 12:19:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:42.381 12:19:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:42.381 12:19:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:42.381 12:19:38 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:42.381 12:19:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:42.381 12:19:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:42.381 12:19:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:42.381 12:19:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:42.381 12:19:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:42.381 12:19:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:42.381 12:19:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.381 12:19:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.381 12:19:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.381 12:19:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.639 12:19:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.639 12:19:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:42.639 "name": "raid_bdev1", 00:20:42.639 "uuid": "21bae887-bec3-4ada-a2f6-4e6963f8be43", 00:20:42.639 "strip_size_kb": 64, 00:20:42.639 "state": "online", 00:20:42.639 "raid_level": "raid5f", 00:20:42.639 "superblock": false, 00:20:42.639 "num_base_bdevs": 3, 00:20:42.639 "num_base_bdevs_discovered": 3, 00:20:42.639 "num_base_bdevs_operational": 3, 00:20:42.639 "base_bdevs_list": [ 00:20:42.639 { 00:20:42.639 "name": "spare", 00:20:42.639 "uuid": "1336e2fc-747e-55e0-adfd-8536de24b751", 00:20:42.639 "is_configured": true, 00:20:42.639 "data_offset": 0, 00:20:42.639 "data_size": 65536 00:20:42.639 }, 00:20:42.639 { 00:20:42.639 
"name": "BaseBdev2", 00:20:42.639 "uuid": "f14f531c-3fd1-50de-8fef-c9f916efdc58", 00:20:42.639 "is_configured": true, 00:20:42.639 "data_offset": 0, 00:20:42.639 "data_size": 65536 00:20:42.639 }, 00:20:42.639 { 00:20:42.639 "name": "BaseBdev3", 00:20:42.639 "uuid": "ef11239f-795a-5455-862b-ab430555c9e8", 00:20:42.639 "is_configured": true, 00:20:42.639 "data_offset": 0, 00:20:42.639 "data_size": 65536 00:20:42.639 } 00:20:42.639 ] 00:20:42.639 }' 00:20:42.639 12:19:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:42.639 12:19:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.898 12:19:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:42.898 12:19:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.898 12:19:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.898 [2024-11-25 12:19:38.985282] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:42.898 [2024-11-25 12:19:38.985333] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:42.898 [2024-11-25 12:19:38.985455] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:42.898 [2024-11-25 12:19:38.985568] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:42.898 [2024-11-25 12:19:38.985594] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:43.158 12:19:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.158 12:19:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:20:43.158 12:19:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.158 12:19:38 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.158 12:19:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.158 12:19:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.158 12:19:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:43.158 12:19:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:43.158 12:19:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:43.158 12:19:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:43.158 12:19:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:43.158 12:19:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:43.158 12:19:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:43.158 12:19:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:43.158 12:19:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:43.158 12:19:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:20:43.158 12:19:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:43.158 12:19:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:43.158 12:19:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:43.417 /dev/nbd0 00:20:43.417 12:19:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:43.417 12:19:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:43.417 12:19:39 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:43.417 12:19:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:20:43.417 12:19:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:43.417 12:19:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:43.417 12:19:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:43.417 12:19:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:20:43.417 12:19:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:43.417 12:19:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:43.417 12:19:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:43.417 1+0 records in 00:20:43.417 1+0 records out 00:20:43.417 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341405 s, 12.0 MB/s 00:20:43.417 12:19:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:43.417 12:19:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:20:43.417 12:19:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:43.417 12:19:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:43.417 12:19:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:20:43.417 12:19:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:43.417 12:19:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:43.417 12:19:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:43.676 /dev/nbd1 00:20:43.935 12:19:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:43.935 12:19:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:43.936 12:19:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:43.936 12:19:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:20:43.936 12:19:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:43.936 12:19:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:43.936 12:19:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:43.936 12:19:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:20:43.936 12:19:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:43.936 12:19:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:43.936 12:19:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:43.936 1+0 records in 00:20:43.936 1+0 records out 00:20:43.936 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296501 s, 13.8 MB/s 00:20:43.936 12:19:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:43.936 12:19:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:20:43.936 12:19:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:43.936 12:19:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:43.936 12:19:39 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:20:43.936 12:19:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:43.936 12:19:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:43.936 12:19:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:43.936 12:19:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:43.936 12:19:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:43.936 12:19:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:43.936 12:19:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:43.936 12:19:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:20:43.936 12:19:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:43.936 12:19:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:44.585 12:19:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:44.585 12:19:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:44.585 12:19:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:44.585 12:19:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:44.585 12:19:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:44.585 12:19:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:44.585 12:19:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:44.585 12:19:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:20:44.585 12:19:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:44.585 12:19:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:44.585 12:19:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:44.585 12:19:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:44.585 12:19:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:44.585 12:19:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:44.585 12:19:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:44.585 12:19:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:44.585 12:19:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:44.585 12:19:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:44.585 12:19:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:20:44.585 12:19:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81939 00:20:44.585 12:19:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81939 ']' 00:20:44.585 12:19:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81939 00:20:44.585 12:19:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:20:44.585 12:19:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:44.585 12:19:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81939 00:20:44.843 killing process with pid 81939 00:20:44.843 Received shutdown signal, test time was about 60.000000 seconds 00:20:44.843 00:20:44.843 Latency(us) 00:20:44.843 
[2024-11-25T12:19:40.934Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:44.843 [2024-11-25T12:19:40.934Z] =================================================================================================================== 00:20:44.843 [2024-11-25T12:19:40.934Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:44.843 12:19:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:44.843 12:19:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:44.844 12:19:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81939' 00:20:44.844 12:19:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81939 00:20:44.844 12:19:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81939 00:20:44.844 [2024-11-25 12:19:40.681660] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:45.102 [2024-11-25 12:19:41.036497] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:46.039 12:19:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:20:46.039 00:20:46.039 real 0m16.586s 00:20:46.039 user 0m21.184s 00:20:46.039 sys 0m2.126s 00:20:46.039 12:19:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:46.039 ************************************ 00:20:46.039 END TEST raid5f_rebuild_test 00:20:46.039 ************************************ 00:20:46.039 12:19:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.039 12:19:42 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:20:46.039 12:19:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:46.039 12:19:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:46.039 12:19:42 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:20:46.039 ************************************ 00:20:46.039 START TEST raid5f_rebuild_test_sb 00:20:46.039 ************************************ 00:20:46.039 12:19:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:20:46.039 12:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:20:46.039 12:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:20:46.039 12:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:46.039 12:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:46.039 12:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:46.039 12:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:46.039 12:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:46.039 12:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:46.039 12:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:46.039 12:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:46.039 12:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:46.039 12:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:46.039 12:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:46.039 12:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:20:46.039 12:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:46.039 12:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:20:46.039 12:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:46.039 12:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:46.039 12:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:46.039 12:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:46.040 12:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:46.040 12:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:46.040 12:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:46.040 12:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:20:46.040 12:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:20:46.040 12:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:20:46.040 12:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:20:46.040 12:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:46.040 12:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:46.040 12:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82391 00:20:46.040 12:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82391 00:20:46.040 12:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:46.040 12:19:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82391 ']' 00:20:46.040 12:19:42 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.040 12:19:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:46.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.040 12:19:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.040 12:19:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:46.040 12:19:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.298 [2024-11-25 12:19:42.210861] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:20:46.298 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:46.298 Zero copy mechanism will not be used. 00:20:46.298 [2024-11-25 12:19:42.211040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82391 ] 00:20:46.557 [2024-11-25 12:19:42.391919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.557 [2024-11-25 12:19:42.522375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.815 [2024-11-25 12:19:42.728505] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:46.815 [2024-11-25 12:19:42.728561] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:47.073 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:47.073 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:20:47.073 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in 
"${base_bdevs[@]}" 00:20:47.073 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:47.073 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.073 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.332 BaseBdev1_malloc 00:20:47.332 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.332 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:47.332 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.332 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.332 [2024-11-25 12:19:43.191906] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:47.332 [2024-11-25 12:19:43.192006] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:47.332 [2024-11-25 12:19:43.192047] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:47.332 [2024-11-25 12:19:43.192066] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:47.332 [2024-11-25 12:19:43.194962] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:47.332 [2024-11-25 12:19:43.195156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:47.332 BaseBdev1 00:20:47.332 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.332 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:47.332 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:47.332 12:19:43 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.332 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.332 BaseBdev2_malloc 00:20:47.332 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.332 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:47.332 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.332 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.332 [2024-11-25 12:19:43.240389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:47.332 [2024-11-25 12:19:43.240466] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:47.332 [2024-11-25 12:19:43.240495] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:47.332 [2024-11-25 12:19:43.240515] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:47.333 [2024-11-25 12:19:43.243471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:47.333 [2024-11-25 12:19:43.243523] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:47.333 BaseBdev2 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:20:47.333 BaseBdev3_malloc 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.333 [2024-11-25 12:19:43.300069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:47.333 [2024-11-25 12:19:43.300284] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:47.333 [2024-11-25 12:19:43.300332] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:47.333 [2024-11-25 12:19:43.300375] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:47.333 [2024-11-25 12:19:43.303163] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:47.333 [2024-11-25 12:19:43.303219] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:47.333 BaseBdev3 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.333 spare_malloc 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.333 spare_delay 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.333 [2024-11-25 12:19:43.359957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:47.333 [2024-11-25 12:19:43.360029] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:47.333 [2024-11-25 12:19:43.360056] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:20:47.333 [2024-11-25 12:19:43.360074] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:47.333 [2024-11-25 12:19:43.362862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:47.333 [2024-11-25 12:19:43.363041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:47.333 spare 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.333 [2024-11-25 12:19:43.368090] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:47.333 [2024-11-25 12:19:43.370667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:47.333 [2024-11-25 12:19:43.370768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:47.333 [2024-11-25 12:19:43.371017] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:47.333 [2024-11-25 12:19:43.371042] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:47.333 [2024-11-25 12:19:43.371384] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:47.333 [2024-11-25 12:19:43.376545] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:47.333 [2024-11-25 12:19:43.376578] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:47.333 [2024-11-25 12:19:43.376811] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 
-- # local raid_bdev_info 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:47.333 "name": "raid_bdev1", 00:20:47.333 "uuid": "90f360ba-8166-46ac-b16f-a3c40a621d48", 00:20:47.333 "strip_size_kb": 64, 00:20:47.333 "state": "online", 00:20:47.333 "raid_level": "raid5f", 00:20:47.333 "superblock": true, 00:20:47.333 "num_base_bdevs": 3, 00:20:47.333 "num_base_bdevs_discovered": 3, 00:20:47.333 "num_base_bdevs_operational": 3, 00:20:47.333 "base_bdevs_list": [ 00:20:47.333 { 00:20:47.333 "name": "BaseBdev1", 00:20:47.333 "uuid": "397b77ca-c8de-5e50-90f3-c00fc2546f20", 00:20:47.333 "is_configured": true, 00:20:47.333 "data_offset": 2048, 00:20:47.333 "data_size": 63488 00:20:47.333 }, 00:20:47.333 { 00:20:47.333 "name": "BaseBdev2", 00:20:47.333 "uuid": "ed7b23fe-d1db-5dab-8364-76c40fc1b5af", 00:20:47.333 "is_configured": true, 00:20:47.333 "data_offset": 2048, 00:20:47.333 "data_size": 63488 00:20:47.333 }, 00:20:47.333 { 00:20:47.333 "name": "BaseBdev3", 00:20:47.333 "uuid": "285f89d8-0f89-5d53-aef8-77f533be6f83", 00:20:47.333 "is_configured": true, 
00:20:47.333 "data_offset": 2048, 00:20:47.333 "data_size": 63488 00:20:47.333 } 00:20:47.333 ] 00:20:47.333 }' 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:47.333 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.902 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:47.902 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:47.902 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.902 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.902 [2024-11-25 12:19:43.882822] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:47.902 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.902 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:20:47.902 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.902 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:47.902 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.902 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.902 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.902 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:20:47.902 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:47.902 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:47.902 12:19:43 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:47.902 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:47.902 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:47.902 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:47.902 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:47.902 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:47.902 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:47.902 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:20:47.902 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:47.902 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:47.902 12:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:48.161 [2024-11-25 12:19:44.206734] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:48.161 /dev/nbd0 00:20:48.161 12:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:48.419 12:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:48.419 12:19:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:48.419 12:19:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:20:48.419 12:19:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:48.419 12:19:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 
-- # (( i <= 20 )) 00:20:48.419 12:19:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:48.419 12:19:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:20:48.419 12:19:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:48.419 12:19:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:48.419 12:19:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:48.419 1+0 records in 00:20:48.419 1+0 records out 00:20:48.419 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033821 s, 12.1 MB/s 00:20:48.419 12:19:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:48.419 12:19:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:20:48.419 12:19:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:48.419 12:19:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:48.419 12:19:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:20:48.419 12:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:48.419 12:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:48.419 12:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:20:48.419 12:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:20:48.419 12:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:20:48.419 12:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom 
of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:20:48.678 496+0 records in 00:20:48.678 496+0 records out 00:20:48.678 65011712 bytes (65 MB, 62 MiB) copied, 0.428192 s, 152 MB/s 00:20:48.678 12:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:48.678 12:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:48.678 12:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:48.678 12:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:48.678 12:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:20:48.678 12:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:48.678 12:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:48.937 12:19:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:48.937 [2024-11-25 12:19:45.001780] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:48.937 12:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:48.937 12:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:48.937 12:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:48.937 12:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:48.937 12:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:48.937 12:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:48.937 12:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:48.937 12:19:45 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:48.937 12:19:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.937 12:19:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.937 [2024-11-25 12:19:45.015622] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:48.937 12:19:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.937 12:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:48.937 12:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:48.937 12:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:48.937 12:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:48.937 12:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:48.937 12:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:48.937 12:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:48.937 12:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:48.937 12:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:48.937 12:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:48.937 12:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.937 12:19:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.937 12:19:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.937 12:19:45 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.196 12:19:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.196 12:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:49.196 "name": "raid_bdev1", 00:20:49.196 "uuid": "90f360ba-8166-46ac-b16f-a3c40a621d48", 00:20:49.196 "strip_size_kb": 64, 00:20:49.196 "state": "online", 00:20:49.196 "raid_level": "raid5f", 00:20:49.196 "superblock": true, 00:20:49.196 "num_base_bdevs": 3, 00:20:49.196 "num_base_bdevs_discovered": 2, 00:20:49.196 "num_base_bdevs_operational": 2, 00:20:49.196 "base_bdevs_list": [ 00:20:49.196 { 00:20:49.196 "name": null, 00:20:49.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:49.196 "is_configured": false, 00:20:49.196 "data_offset": 0, 00:20:49.196 "data_size": 63488 00:20:49.196 }, 00:20:49.196 { 00:20:49.196 "name": "BaseBdev2", 00:20:49.196 "uuid": "ed7b23fe-d1db-5dab-8364-76c40fc1b5af", 00:20:49.196 "is_configured": true, 00:20:49.196 "data_offset": 2048, 00:20:49.196 "data_size": 63488 00:20:49.196 }, 00:20:49.196 { 00:20:49.196 "name": "BaseBdev3", 00:20:49.196 "uuid": "285f89d8-0f89-5d53-aef8-77f533be6f83", 00:20:49.196 "is_configured": true, 00:20:49.196 "data_offset": 2048, 00:20:49.196 "data_size": 63488 00:20:49.196 } 00:20:49.196 ] 00:20:49.196 }' 00:20:49.196 12:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:49.196 12:19:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.456 12:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:49.456 12:19:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.456 12:19:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.456 [2024-11-25 12:19:45.536224] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:49.715 [2024-11-25 12:19:45.552119] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:20:49.715 12:19:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.715 12:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:49.715 [2024-11-25 12:19:45.559681] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:50.652 12:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:50.652 12:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:50.652 12:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:50.652 12:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:50.652 12:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:50.652 12:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:50.652 12:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.652 12:19:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.652 12:19:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.652 12:19:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.652 12:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:50.652 "name": "raid_bdev1", 00:20:50.652 "uuid": "90f360ba-8166-46ac-b16f-a3c40a621d48", 00:20:50.652 "strip_size_kb": 64, 00:20:50.652 "state": "online", 00:20:50.652 "raid_level": "raid5f", 00:20:50.652 
"superblock": true, 00:20:50.652 "num_base_bdevs": 3, 00:20:50.652 "num_base_bdevs_discovered": 3, 00:20:50.652 "num_base_bdevs_operational": 3, 00:20:50.652 "process": { 00:20:50.652 "type": "rebuild", 00:20:50.652 "target": "spare", 00:20:50.652 "progress": { 00:20:50.652 "blocks": 18432, 00:20:50.652 "percent": 14 00:20:50.652 } 00:20:50.652 }, 00:20:50.652 "base_bdevs_list": [ 00:20:50.652 { 00:20:50.652 "name": "spare", 00:20:50.652 "uuid": "ac01d91a-cab9-5fd9-bfb4-823539e1c745", 00:20:50.652 "is_configured": true, 00:20:50.652 "data_offset": 2048, 00:20:50.652 "data_size": 63488 00:20:50.652 }, 00:20:50.652 { 00:20:50.652 "name": "BaseBdev2", 00:20:50.652 "uuid": "ed7b23fe-d1db-5dab-8364-76c40fc1b5af", 00:20:50.652 "is_configured": true, 00:20:50.652 "data_offset": 2048, 00:20:50.652 "data_size": 63488 00:20:50.652 }, 00:20:50.652 { 00:20:50.652 "name": "BaseBdev3", 00:20:50.652 "uuid": "285f89d8-0f89-5d53-aef8-77f533be6f83", 00:20:50.652 "is_configured": true, 00:20:50.652 "data_offset": 2048, 00:20:50.652 "data_size": 63488 00:20:50.652 } 00:20:50.652 ] 00:20:50.652 }' 00:20:50.652 12:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:50.652 12:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:50.652 12:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:50.652 12:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:50.652 12:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:50.652 12:19:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.652 12:19:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.652 [2024-11-25 12:19:46.717605] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:20:50.911 [2024-11-25 12:19:46.775151] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:50.911 [2024-11-25 12:19:46.775234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:50.911 [2024-11-25 12:19:46.775265] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:50.911 [2024-11-25 12:19:46.775278] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:50.911 12:19:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.911 12:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:50.911 12:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:50.911 12:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:50.911 12:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:50.911 12:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:50.911 12:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:50.911 12:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:50.911 12:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:50.911 12:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:50.911 12:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:50.911 12:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:50.911 12:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:20:50.911 12:19:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.911 12:19:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.911 12:19:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.911 12:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:50.911 "name": "raid_bdev1", 00:20:50.911 "uuid": "90f360ba-8166-46ac-b16f-a3c40a621d48", 00:20:50.911 "strip_size_kb": 64, 00:20:50.911 "state": "online", 00:20:50.911 "raid_level": "raid5f", 00:20:50.911 "superblock": true, 00:20:50.911 "num_base_bdevs": 3, 00:20:50.911 "num_base_bdevs_discovered": 2, 00:20:50.911 "num_base_bdevs_operational": 2, 00:20:50.911 "base_bdevs_list": [ 00:20:50.911 { 00:20:50.911 "name": null, 00:20:50.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.911 "is_configured": false, 00:20:50.911 "data_offset": 0, 00:20:50.911 "data_size": 63488 00:20:50.911 }, 00:20:50.911 { 00:20:50.911 "name": "BaseBdev2", 00:20:50.911 "uuid": "ed7b23fe-d1db-5dab-8364-76c40fc1b5af", 00:20:50.911 "is_configured": true, 00:20:50.911 "data_offset": 2048, 00:20:50.911 "data_size": 63488 00:20:50.911 }, 00:20:50.911 { 00:20:50.911 "name": "BaseBdev3", 00:20:50.911 "uuid": "285f89d8-0f89-5d53-aef8-77f533be6f83", 00:20:50.911 "is_configured": true, 00:20:50.911 "data_offset": 2048, 00:20:50.911 "data_size": 63488 00:20:50.911 } 00:20:50.911 ] 00:20:50.911 }' 00:20:50.911 12:19:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:50.911 12:19:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.507 12:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:51.507 12:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:51.507 12:19:47 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:51.507 12:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:51.507 12:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:51.507 12:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.507 12:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.507 12:19:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.507 12:19:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.507 12:19:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.507 12:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:51.507 "name": "raid_bdev1", 00:20:51.507 "uuid": "90f360ba-8166-46ac-b16f-a3c40a621d48", 00:20:51.507 "strip_size_kb": 64, 00:20:51.507 "state": "online", 00:20:51.507 "raid_level": "raid5f", 00:20:51.507 "superblock": true, 00:20:51.507 "num_base_bdevs": 3, 00:20:51.507 "num_base_bdevs_discovered": 2, 00:20:51.507 "num_base_bdevs_operational": 2, 00:20:51.507 "base_bdevs_list": [ 00:20:51.507 { 00:20:51.507 "name": null, 00:20:51.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.507 "is_configured": false, 00:20:51.507 "data_offset": 0, 00:20:51.507 "data_size": 63488 00:20:51.507 }, 00:20:51.507 { 00:20:51.507 "name": "BaseBdev2", 00:20:51.507 "uuid": "ed7b23fe-d1db-5dab-8364-76c40fc1b5af", 00:20:51.507 "is_configured": true, 00:20:51.507 "data_offset": 2048, 00:20:51.507 "data_size": 63488 00:20:51.507 }, 00:20:51.507 { 00:20:51.507 "name": "BaseBdev3", 00:20:51.507 "uuid": "285f89d8-0f89-5d53-aef8-77f533be6f83", 00:20:51.507 "is_configured": true, 00:20:51.507 "data_offset": 2048, 00:20:51.507 
"data_size": 63488 00:20:51.507 } 00:20:51.507 ] 00:20:51.507 }' 00:20:51.507 12:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:51.507 12:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:51.507 12:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:51.507 12:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:51.507 12:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:51.507 12:19:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.507 12:19:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.507 [2024-11-25 12:19:47.474161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:51.507 [2024-11-25 12:19:47.488773] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:20:51.507 12:19:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.507 12:19:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:51.507 [2024-11-25 12:19:47.495960] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:52.443 12:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:52.443 12:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:52.443 12:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:52.443 12:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:52.443 12:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:20:52.443 12:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:52.443 12:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.443 12:19:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.443 12:19:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:52.443 12:19:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.704 12:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:52.704 "name": "raid_bdev1", 00:20:52.704 "uuid": "90f360ba-8166-46ac-b16f-a3c40a621d48", 00:20:52.704 "strip_size_kb": 64, 00:20:52.704 "state": "online", 00:20:52.704 "raid_level": "raid5f", 00:20:52.704 "superblock": true, 00:20:52.704 "num_base_bdevs": 3, 00:20:52.704 "num_base_bdevs_discovered": 3, 00:20:52.704 "num_base_bdevs_operational": 3, 00:20:52.704 "process": { 00:20:52.704 "type": "rebuild", 00:20:52.704 "target": "spare", 00:20:52.704 "progress": { 00:20:52.704 "blocks": 18432, 00:20:52.704 "percent": 14 00:20:52.704 } 00:20:52.704 }, 00:20:52.704 "base_bdevs_list": [ 00:20:52.704 { 00:20:52.704 "name": "spare", 00:20:52.704 "uuid": "ac01d91a-cab9-5fd9-bfb4-823539e1c745", 00:20:52.704 "is_configured": true, 00:20:52.704 "data_offset": 2048, 00:20:52.704 "data_size": 63488 00:20:52.704 }, 00:20:52.704 { 00:20:52.704 "name": "BaseBdev2", 00:20:52.704 "uuid": "ed7b23fe-d1db-5dab-8364-76c40fc1b5af", 00:20:52.704 "is_configured": true, 00:20:52.704 "data_offset": 2048, 00:20:52.704 "data_size": 63488 00:20:52.704 }, 00:20:52.704 { 00:20:52.704 "name": "BaseBdev3", 00:20:52.704 "uuid": "285f89d8-0f89-5d53-aef8-77f533be6f83", 00:20:52.704 "is_configured": true, 00:20:52.704 "data_offset": 2048, 00:20:52.704 "data_size": 63488 00:20:52.704 } 00:20:52.704 ] 00:20:52.704 }' 
00:20:52.704 12:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:52.704 12:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:52.704 12:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:52.704 12:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:52.704 12:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:52.704 12:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:52.704 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:52.704 12:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:20:52.704 12:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:20:52.704 12:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=607 00:20:52.704 12:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:52.704 12:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:52.704 12:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:52.704 12:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:52.704 12:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:52.704 12:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:52.704 12:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:52.704 12:19:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:52.704 12:19:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:52.704 12:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.704 12:19:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.704 12:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:52.704 "name": "raid_bdev1", 00:20:52.704 "uuid": "90f360ba-8166-46ac-b16f-a3c40a621d48", 00:20:52.704 "strip_size_kb": 64, 00:20:52.704 "state": "online", 00:20:52.704 "raid_level": "raid5f", 00:20:52.704 "superblock": true, 00:20:52.704 "num_base_bdevs": 3, 00:20:52.704 "num_base_bdevs_discovered": 3, 00:20:52.704 "num_base_bdevs_operational": 3, 00:20:52.704 "process": { 00:20:52.704 "type": "rebuild", 00:20:52.704 "target": "spare", 00:20:52.704 "progress": { 00:20:52.704 "blocks": 22528, 00:20:52.704 "percent": 17 00:20:52.704 } 00:20:52.704 }, 00:20:52.704 "base_bdevs_list": [ 00:20:52.704 { 00:20:52.704 "name": "spare", 00:20:52.704 "uuid": "ac01d91a-cab9-5fd9-bfb4-823539e1c745", 00:20:52.704 "is_configured": true, 00:20:52.704 "data_offset": 2048, 00:20:52.704 "data_size": 63488 00:20:52.704 }, 00:20:52.704 { 00:20:52.704 "name": "BaseBdev2", 00:20:52.704 "uuid": "ed7b23fe-d1db-5dab-8364-76c40fc1b5af", 00:20:52.704 "is_configured": true, 00:20:52.704 "data_offset": 2048, 00:20:52.704 "data_size": 63488 00:20:52.704 }, 00:20:52.704 { 00:20:52.704 "name": "BaseBdev3", 00:20:52.704 "uuid": "285f89d8-0f89-5d53-aef8-77f533be6f83", 00:20:52.704 "is_configured": true, 00:20:52.704 "data_offset": 2048, 00:20:52.704 "data_size": 63488 00:20:52.704 } 00:20:52.704 ] 00:20:52.704 }' 00:20:52.704 12:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:52.704 12:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:20:52.704 12:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:52.964 12:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:52.964 12:19:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:53.904 12:19:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:53.904 12:19:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:53.904 12:19:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:53.904 12:19:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:53.904 12:19:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:53.904 12:19:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:53.904 12:19:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.904 12:19:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.904 12:19:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.904 12:19:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:53.904 12:19:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.904 12:19:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:53.904 "name": "raid_bdev1", 00:20:53.904 "uuid": "90f360ba-8166-46ac-b16f-a3c40a621d48", 00:20:53.904 "strip_size_kb": 64, 00:20:53.904 "state": "online", 00:20:53.904 "raid_level": "raid5f", 00:20:53.904 "superblock": true, 00:20:53.904 "num_base_bdevs": 3, 00:20:53.904 "num_base_bdevs_discovered": 3, 00:20:53.904 
"num_base_bdevs_operational": 3, 00:20:53.904 "process": { 00:20:53.904 "type": "rebuild", 00:20:53.904 "target": "spare", 00:20:53.905 "progress": { 00:20:53.905 "blocks": 47104, 00:20:53.905 "percent": 37 00:20:53.905 } 00:20:53.905 }, 00:20:53.905 "base_bdevs_list": [ 00:20:53.905 { 00:20:53.905 "name": "spare", 00:20:53.905 "uuid": "ac01d91a-cab9-5fd9-bfb4-823539e1c745", 00:20:53.905 "is_configured": true, 00:20:53.905 "data_offset": 2048, 00:20:53.905 "data_size": 63488 00:20:53.905 }, 00:20:53.905 { 00:20:53.905 "name": "BaseBdev2", 00:20:53.905 "uuid": "ed7b23fe-d1db-5dab-8364-76c40fc1b5af", 00:20:53.905 "is_configured": true, 00:20:53.905 "data_offset": 2048, 00:20:53.905 "data_size": 63488 00:20:53.905 }, 00:20:53.905 { 00:20:53.905 "name": "BaseBdev3", 00:20:53.905 "uuid": "285f89d8-0f89-5d53-aef8-77f533be6f83", 00:20:53.905 "is_configured": true, 00:20:53.905 "data_offset": 2048, 00:20:53.905 "data_size": 63488 00:20:53.905 } 00:20:53.905 ] 00:20:53.905 }' 00:20:53.905 12:19:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:53.905 12:19:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:53.905 12:19:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:53.905 12:19:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:53.905 12:19:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:55.281 12:19:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:55.281 12:19:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:55.281 12:19:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:55.281 12:19:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:20:55.281 12:19:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:55.281 12:19:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:55.281 12:19:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.281 12:19:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.281 12:19:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:55.281 12:19:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.281 12:19:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.281 12:19:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:55.281 "name": "raid_bdev1", 00:20:55.281 "uuid": "90f360ba-8166-46ac-b16f-a3c40a621d48", 00:20:55.281 "strip_size_kb": 64, 00:20:55.281 "state": "online", 00:20:55.281 "raid_level": "raid5f", 00:20:55.281 "superblock": true, 00:20:55.281 "num_base_bdevs": 3, 00:20:55.281 "num_base_bdevs_discovered": 3, 00:20:55.281 "num_base_bdevs_operational": 3, 00:20:55.281 "process": { 00:20:55.281 "type": "rebuild", 00:20:55.281 "target": "spare", 00:20:55.281 "progress": { 00:20:55.281 "blocks": 69632, 00:20:55.281 "percent": 54 00:20:55.281 } 00:20:55.281 }, 00:20:55.281 "base_bdevs_list": [ 00:20:55.281 { 00:20:55.281 "name": "spare", 00:20:55.281 "uuid": "ac01d91a-cab9-5fd9-bfb4-823539e1c745", 00:20:55.281 "is_configured": true, 00:20:55.281 "data_offset": 2048, 00:20:55.281 "data_size": 63488 00:20:55.281 }, 00:20:55.281 { 00:20:55.281 "name": "BaseBdev2", 00:20:55.281 "uuid": "ed7b23fe-d1db-5dab-8364-76c40fc1b5af", 00:20:55.281 "is_configured": true, 00:20:55.281 "data_offset": 2048, 00:20:55.281 "data_size": 63488 00:20:55.281 }, 00:20:55.281 { 00:20:55.281 "name": "BaseBdev3", 
00:20:55.281 "uuid": "285f89d8-0f89-5d53-aef8-77f533be6f83", 00:20:55.281 "is_configured": true, 00:20:55.281 "data_offset": 2048, 00:20:55.281 "data_size": 63488 00:20:55.281 } 00:20:55.281 ] 00:20:55.281 }' 00:20:55.281 12:19:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:55.281 12:19:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:55.281 12:19:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:55.281 12:19:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:55.281 12:19:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:56.237 12:19:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:56.237 12:19:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:56.237 12:19:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:56.237 12:19:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:56.237 12:19:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:56.237 12:19:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:56.237 12:19:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:56.237 12:19:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.237 12:19:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:56.237 12:19:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:56.237 12:19:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:20:56.237 12:19:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:56.237 "name": "raid_bdev1", 00:20:56.237 "uuid": "90f360ba-8166-46ac-b16f-a3c40a621d48", 00:20:56.237 "strip_size_kb": 64, 00:20:56.237 "state": "online", 00:20:56.237 "raid_level": "raid5f", 00:20:56.237 "superblock": true, 00:20:56.237 "num_base_bdevs": 3, 00:20:56.237 "num_base_bdevs_discovered": 3, 00:20:56.237 "num_base_bdevs_operational": 3, 00:20:56.237 "process": { 00:20:56.237 "type": "rebuild", 00:20:56.237 "target": "spare", 00:20:56.237 "progress": { 00:20:56.237 "blocks": 94208, 00:20:56.237 "percent": 74 00:20:56.237 } 00:20:56.237 }, 00:20:56.237 "base_bdevs_list": [ 00:20:56.237 { 00:20:56.237 "name": "spare", 00:20:56.237 "uuid": "ac01d91a-cab9-5fd9-bfb4-823539e1c745", 00:20:56.237 "is_configured": true, 00:20:56.237 "data_offset": 2048, 00:20:56.237 "data_size": 63488 00:20:56.237 }, 00:20:56.237 { 00:20:56.237 "name": "BaseBdev2", 00:20:56.237 "uuid": "ed7b23fe-d1db-5dab-8364-76c40fc1b5af", 00:20:56.237 "is_configured": true, 00:20:56.237 "data_offset": 2048, 00:20:56.237 "data_size": 63488 00:20:56.237 }, 00:20:56.237 { 00:20:56.237 "name": "BaseBdev3", 00:20:56.237 "uuid": "285f89d8-0f89-5d53-aef8-77f533be6f83", 00:20:56.237 "is_configured": true, 00:20:56.237 "data_offset": 2048, 00:20:56.237 "data_size": 63488 00:20:56.237 } 00:20:56.237 ] 00:20:56.238 }' 00:20:56.238 12:19:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:56.238 12:19:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:56.238 12:19:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:56.238 12:19:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:56.238 12:19:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:57.615 12:19:53 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:57.615 12:19:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:57.615 12:19:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:57.615 12:19:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:57.615 12:19:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:57.615 12:19:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:57.616 12:19:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.616 12:19:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.616 12:19:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.616 12:19:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:57.616 12:19:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.616 12:19:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:57.616 "name": "raid_bdev1", 00:20:57.616 "uuid": "90f360ba-8166-46ac-b16f-a3c40a621d48", 00:20:57.616 "strip_size_kb": 64, 00:20:57.616 "state": "online", 00:20:57.616 "raid_level": "raid5f", 00:20:57.616 "superblock": true, 00:20:57.616 "num_base_bdevs": 3, 00:20:57.616 "num_base_bdevs_discovered": 3, 00:20:57.616 "num_base_bdevs_operational": 3, 00:20:57.616 "process": { 00:20:57.616 "type": "rebuild", 00:20:57.616 "target": "spare", 00:20:57.616 "progress": { 00:20:57.616 "blocks": 116736, 00:20:57.616 "percent": 91 00:20:57.616 } 00:20:57.616 }, 00:20:57.616 "base_bdevs_list": [ 00:20:57.616 { 00:20:57.616 "name": "spare", 00:20:57.616 "uuid": 
"ac01d91a-cab9-5fd9-bfb4-823539e1c745", 00:20:57.616 "is_configured": true, 00:20:57.616 "data_offset": 2048, 00:20:57.616 "data_size": 63488 00:20:57.616 }, 00:20:57.616 { 00:20:57.616 "name": "BaseBdev2", 00:20:57.616 "uuid": "ed7b23fe-d1db-5dab-8364-76c40fc1b5af", 00:20:57.616 "is_configured": true, 00:20:57.616 "data_offset": 2048, 00:20:57.616 "data_size": 63488 00:20:57.616 }, 00:20:57.616 { 00:20:57.616 "name": "BaseBdev3", 00:20:57.616 "uuid": "285f89d8-0f89-5d53-aef8-77f533be6f83", 00:20:57.616 "is_configured": true, 00:20:57.616 "data_offset": 2048, 00:20:57.616 "data_size": 63488 00:20:57.616 } 00:20:57.616 ] 00:20:57.616 }' 00:20:57.616 12:19:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:57.616 12:19:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:57.616 12:19:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:57.616 12:19:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:57.616 12:19:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:57.874 [2024-11-25 12:19:53.783063] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:57.874 [2024-11-25 12:19:53.783166] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:57.874 [2024-11-25 12:19:53.783316] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:58.482 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:58.482 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:58.482 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:58.482 12:19:54 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:58.482 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:58.482 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:58.482 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:58.482 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.482 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:58.482 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.482 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.482 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:58.482 "name": "raid_bdev1", 00:20:58.482 "uuid": "90f360ba-8166-46ac-b16f-a3c40a621d48", 00:20:58.482 "strip_size_kb": 64, 00:20:58.482 "state": "online", 00:20:58.482 "raid_level": "raid5f", 00:20:58.482 "superblock": true, 00:20:58.482 "num_base_bdevs": 3, 00:20:58.482 "num_base_bdevs_discovered": 3, 00:20:58.482 "num_base_bdevs_operational": 3, 00:20:58.482 "base_bdevs_list": [ 00:20:58.482 { 00:20:58.482 "name": "spare", 00:20:58.482 "uuid": "ac01d91a-cab9-5fd9-bfb4-823539e1c745", 00:20:58.482 "is_configured": true, 00:20:58.482 "data_offset": 2048, 00:20:58.482 "data_size": 63488 00:20:58.482 }, 00:20:58.482 { 00:20:58.482 "name": "BaseBdev2", 00:20:58.482 "uuid": "ed7b23fe-d1db-5dab-8364-76c40fc1b5af", 00:20:58.482 "is_configured": true, 00:20:58.482 "data_offset": 2048, 00:20:58.482 "data_size": 63488 00:20:58.482 }, 00:20:58.482 { 00:20:58.482 "name": "BaseBdev3", 00:20:58.482 "uuid": "285f89d8-0f89-5d53-aef8-77f533be6f83", 00:20:58.482 "is_configured": true, 00:20:58.482 "data_offset": 2048, 00:20:58.482 "data_size": 63488 00:20:58.482 } 
00:20:58.482 ] 00:20:58.482 }' 00:20:58.482 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:58.739 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:58.739 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:58.739 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:58.739 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:20:58.739 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:58.739 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:58.739 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:58.739 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:58.739 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:58.739 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:58.739 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.739 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:58.739 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.739 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.739 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:58.739 "name": "raid_bdev1", 00:20:58.739 "uuid": "90f360ba-8166-46ac-b16f-a3c40a621d48", 00:20:58.739 "strip_size_kb": 64, 00:20:58.739 "state": "online", 00:20:58.739 "raid_level": 
"raid5f", 00:20:58.739 "superblock": true, 00:20:58.739 "num_base_bdevs": 3, 00:20:58.739 "num_base_bdevs_discovered": 3, 00:20:58.739 "num_base_bdevs_operational": 3, 00:20:58.739 "base_bdevs_list": [ 00:20:58.739 { 00:20:58.739 "name": "spare", 00:20:58.739 "uuid": "ac01d91a-cab9-5fd9-bfb4-823539e1c745", 00:20:58.739 "is_configured": true, 00:20:58.739 "data_offset": 2048, 00:20:58.739 "data_size": 63488 00:20:58.739 }, 00:20:58.739 { 00:20:58.739 "name": "BaseBdev2", 00:20:58.739 "uuid": "ed7b23fe-d1db-5dab-8364-76c40fc1b5af", 00:20:58.739 "is_configured": true, 00:20:58.739 "data_offset": 2048, 00:20:58.739 "data_size": 63488 00:20:58.739 }, 00:20:58.739 { 00:20:58.739 "name": "BaseBdev3", 00:20:58.739 "uuid": "285f89d8-0f89-5d53-aef8-77f533be6f83", 00:20:58.739 "is_configured": true, 00:20:58.739 "data_offset": 2048, 00:20:58.739 "data_size": 63488 00:20:58.739 } 00:20:58.739 ] 00:20:58.739 }' 00:20:58.739 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:58.739 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:58.739 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:58.739 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:58.739 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:58.739 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:58.739 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:58.739 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:58.739 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:58.739 12:19:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:58.739 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:58.739 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:58.739 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:58.739 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:58.739 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:58.739 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.739 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:58.739 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.739 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.739 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:58.739 "name": "raid_bdev1", 00:20:58.739 "uuid": "90f360ba-8166-46ac-b16f-a3c40a621d48", 00:20:58.739 "strip_size_kb": 64, 00:20:58.739 "state": "online", 00:20:58.739 "raid_level": "raid5f", 00:20:58.739 "superblock": true, 00:20:58.739 "num_base_bdevs": 3, 00:20:58.739 "num_base_bdevs_discovered": 3, 00:20:58.739 "num_base_bdevs_operational": 3, 00:20:58.739 "base_bdevs_list": [ 00:20:58.739 { 00:20:58.739 "name": "spare", 00:20:58.739 "uuid": "ac01d91a-cab9-5fd9-bfb4-823539e1c745", 00:20:58.739 "is_configured": true, 00:20:58.739 "data_offset": 2048, 00:20:58.739 "data_size": 63488 00:20:58.739 }, 00:20:58.739 { 00:20:58.739 "name": "BaseBdev2", 00:20:58.739 "uuid": "ed7b23fe-d1db-5dab-8364-76c40fc1b5af", 00:20:58.739 "is_configured": true, 00:20:58.739 "data_offset": 2048, 00:20:58.739 
"data_size": 63488 00:20:58.739 }, 00:20:58.739 { 00:20:58.739 "name": "BaseBdev3", 00:20:58.739 "uuid": "285f89d8-0f89-5d53-aef8-77f533be6f83", 00:20:58.739 "is_configured": true, 00:20:58.739 "data_offset": 2048, 00:20:58.739 "data_size": 63488 00:20:58.739 } 00:20:58.739 ] 00:20:58.739 }' 00:20:58.739 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:58.739 12:19:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:59.303 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:59.303 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.303 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:59.303 [2024-11-25 12:19:55.262735] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:59.303 [2024-11-25 12:19:55.262775] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:59.303 [2024-11-25 12:19:55.262888] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:59.303 [2024-11-25 12:19:55.263010] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:59.303 [2024-11-25 12:19:55.263035] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:59.303 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.303 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.303 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:20:59.303 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.303 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:20:59.303 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.303 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:59.303 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:59.303 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:59.303 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:59.303 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:59.304 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:59.304 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:59.304 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:59.304 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:59.304 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:20:59.304 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:59.304 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:59.304 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:59.560 /dev/nbd0 00:20:59.818 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:59.818 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:59.818 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 
00:20:59.818 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:20:59.818 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:59.818 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:59.818 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:59.818 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:20:59.818 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:59.818 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:59.818 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:59.818 1+0 records in 00:20:59.818 1+0 records out 00:20:59.818 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326404 s, 12.5 MB/s 00:20:59.818 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:59.818 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:20:59.818 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:59.818 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:59.818 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:20:59.818 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:59.818 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:59.818 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:21:00.076 /dev/nbd1 00:21:00.076 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:00.076 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:00.076 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:00.076 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:21:00.077 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:00.077 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:00.077 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:00.077 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:21:00.077 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:00.077 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:00.077 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:00.077 1+0 records in 00:21:00.077 1+0 records out 00:21:00.077 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283803 s, 14.4 MB/s 00:21:00.077 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:00.077 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:21:00.077 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:00.077 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- 
# '[' 4096 '!=' 0 ']' 00:21:00.077 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:21:00.077 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:00.077 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:00.077 12:19:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:00.077 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:21:00.077 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:00.077 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:00.077 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:00.077 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:21:00.077 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:00.077 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:00.335 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:00.593 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:00.593 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:00.593 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:00.593 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:00.593 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:00.593 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@41 -- # break 00:21:00.593 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:00.593 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:00.593 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:00.853 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:00.853 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:00.853 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:00.853 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:00.853 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:00.853 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:00.853 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:00.853 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:00.853 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:00.853 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:00.853 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.853 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:00.853 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.853 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:00.853 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:00.853 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:00.853 [2024-11-25 12:19:56.721112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:00.853 [2024-11-25 12:19:56.721197] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:00.853 [2024-11-25 12:19:56.721230] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:21:00.853 [2024-11-25 12:19:56.721248] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:00.853 [2024-11-25 12:19:56.724276] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:00.853 [2024-11-25 12:19:56.724332] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:00.853 [2024-11-25 12:19:56.724482] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:00.853 [2024-11-25 12:19:56.724552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:00.853 [2024-11-25 12:19:56.724726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:00.853 [2024-11-25 12:19:56.724884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:00.853 spare 00:21:00.853 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.853 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:00.853 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.853 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:00.853 [2024-11-25 12:19:56.825012] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:00.853 [2024-11-25 12:19:56.825065] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:00.853 [2024-11-25 12:19:56.825505] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:21:00.853 [2024-11-25 12:19:56.830441] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:00.853 [2024-11-25 12:19:56.830610] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:00.853 [2024-11-25 12:19:56.830903] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:00.853 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.853 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:00.853 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:00.853 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:00.853 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:00.853 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:00.853 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:00.853 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:00.853 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:00.853 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:00.853 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:00.853 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.853 12:19:56 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.853 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.853 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:00.853 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.853 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:00.853 "name": "raid_bdev1", 00:21:00.853 "uuid": "90f360ba-8166-46ac-b16f-a3c40a621d48", 00:21:00.853 "strip_size_kb": 64, 00:21:00.853 "state": "online", 00:21:00.853 "raid_level": "raid5f", 00:21:00.853 "superblock": true, 00:21:00.853 "num_base_bdevs": 3, 00:21:00.853 "num_base_bdevs_discovered": 3, 00:21:00.853 "num_base_bdevs_operational": 3, 00:21:00.853 "base_bdevs_list": [ 00:21:00.853 { 00:21:00.853 "name": "spare", 00:21:00.853 "uuid": "ac01d91a-cab9-5fd9-bfb4-823539e1c745", 00:21:00.853 "is_configured": true, 00:21:00.853 "data_offset": 2048, 00:21:00.875 "data_size": 63488 00:21:00.875 }, 00:21:00.875 { 00:21:00.875 "name": "BaseBdev2", 00:21:00.875 "uuid": "ed7b23fe-d1db-5dab-8364-76c40fc1b5af", 00:21:00.875 "is_configured": true, 00:21:00.875 "data_offset": 2048, 00:21:00.875 "data_size": 63488 00:21:00.875 }, 00:21:00.875 { 00:21:00.875 "name": "BaseBdev3", 00:21:00.875 "uuid": "285f89d8-0f89-5d53-aef8-77f533be6f83", 00:21:00.875 "is_configured": true, 00:21:00.875 "data_offset": 2048, 00:21:00.875 "data_size": 63488 00:21:00.875 } 00:21:00.875 ] 00:21:00.875 }' 00:21:00.875 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:00.876 12:19:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:01.443 12:19:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:01.443 12:19:57 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:01.443 12:19:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:01.443 12:19:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:01.443 12:19:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:01.444 12:19:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.444 12:19:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.444 12:19:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.444 12:19:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:01.444 12:19:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.444 12:19:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:01.444 "name": "raid_bdev1", 00:21:01.444 "uuid": "90f360ba-8166-46ac-b16f-a3c40a621d48", 00:21:01.444 "strip_size_kb": 64, 00:21:01.444 "state": "online", 00:21:01.444 "raid_level": "raid5f", 00:21:01.444 "superblock": true, 00:21:01.444 "num_base_bdevs": 3, 00:21:01.444 "num_base_bdevs_discovered": 3, 00:21:01.444 "num_base_bdevs_operational": 3, 00:21:01.444 "base_bdevs_list": [ 00:21:01.444 { 00:21:01.444 "name": "spare", 00:21:01.444 "uuid": "ac01d91a-cab9-5fd9-bfb4-823539e1c745", 00:21:01.444 "is_configured": true, 00:21:01.444 "data_offset": 2048, 00:21:01.444 "data_size": 63488 00:21:01.444 }, 00:21:01.444 { 00:21:01.444 "name": "BaseBdev2", 00:21:01.444 "uuid": "ed7b23fe-d1db-5dab-8364-76c40fc1b5af", 00:21:01.444 "is_configured": true, 00:21:01.444 "data_offset": 2048, 00:21:01.444 "data_size": 63488 00:21:01.444 }, 00:21:01.444 { 00:21:01.444 "name": "BaseBdev3", 00:21:01.444 "uuid": 
"285f89d8-0f89-5d53-aef8-77f533be6f83", 00:21:01.444 "is_configured": true, 00:21:01.444 "data_offset": 2048, 00:21:01.444 "data_size": 63488 00:21:01.444 } 00:21:01.444 ] 00:21:01.444 }' 00:21:01.444 12:19:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:01.444 12:19:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:01.444 12:19:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:01.444 12:19:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:01.444 12:19:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.444 12:19:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.444 12:19:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:01.444 12:19:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:01.444 12:19:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.703 12:19:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:01.703 12:19:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:01.703 12:19:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.703 12:19:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:01.703 [2024-11-25 12:19:57.548806] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:01.703 12:19:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.703 12:19:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:01.703 
12:19:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:01.703 12:19:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:01.703 12:19:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:01.703 12:19:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:01.703 12:19:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:01.703 12:19:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:01.703 12:19:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:01.703 12:19:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:01.703 12:19:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:01.703 12:19:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.703 12:19:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.703 12:19:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.703 12:19:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:01.703 12:19:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.703 12:19:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:01.703 "name": "raid_bdev1", 00:21:01.703 "uuid": "90f360ba-8166-46ac-b16f-a3c40a621d48", 00:21:01.703 "strip_size_kb": 64, 00:21:01.703 "state": "online", 00:21:01.703 "raid_level": "raid5f", 00:21:01.703 "superblock": true, 00:21:01.703 "num_base_bdevs": 3, 00:21:01.703 "num_base_bdevs_discovered": 2, 00:21:01.703 "num_base_bdevs_operational": 2, 
00:21:01.703 "base_bdevs_list": [ 00:21:01.703 { 00:21:01.703 "name": null, 00:21:01.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.703 "is_configured": false, 00:21:01.703 "data_offset": 0, 00:21:01.703 "data_size": 63488 00:21:01.703 }, 00:21:01.703 { 00:21:01.703 "name": "BaseBdev2", 00:21:01.703 "uuid": "ed7b23fe-d1db-5dab-8364-76c40fc1b5af", 00:21:01.703 "is_configured": true, 00:21:01.703 "data_offset": 2048, 00:21:01.703 "data_size": 63488 00:21:01.703 }, 00:21:01.703 { 00:21:01.703 "name": "BaseBdev3", 00:21:01.703 "uuid": "285f89d8-0f89-5d53-aef8-77f533be6f83", 00:21:01.703 "is_configured": true, 00:21:01.703 "data_offset": 2048, 00:21:01.703 "data_size": 63488 00:21:01.703 } 00:21:01.703 ] 00:21:01.703 }' 00:21:01.703 12:19:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:01.703 12:19:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:02.271 12:19:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:02.271 12:19:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.271 12:19:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:02.271 [2024-11-25 12:19:58.064955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:02.272 [2024-11-25 12:19:58.065184] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:02.272 [2024-11-25 12:19:58.065212] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:02.272 [2024-11-25 12:19:58.065258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:02.272 [2024-11-25 12:19:58.079739] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:21:02.272 12:19:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.272 12:19:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:02.272 [2024-11-25 12:19:58.086887] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:03.206 12:19:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:03.206 12:19:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:03.206 12:19:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:03.206 12:19:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:03.206 12:19:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:03.206 12:19:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:03.206 12:19:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.206 12:19:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:03.206 12:19:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.206 12:19:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.206 12:19:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:03.206 "name": "raid_bdev1", 00:21:03.206 "uuid": "90f360ba-8166-46ac-b16f-a3c40a621d48", 00:21:03.206 "strip_size_kb": 64, 00:21:03.206 "state": "online", 00:21:03.206 
"raid_level": "raid5f", 00:21:03.206 "superblock": true, 00:21:03.206 "num_base_bdevs": 3, 00:21:03.206 "num_base_bdevs_discovered": 3, 00:21:03.206 "num_base_bdevs_operational": 3, 00:21:03.206 "process": { 00:21:03.206 "type": "rebuild", 00:21:03.206 "target": "spare", 00:21:03.206 "progress": { 00:21:03.206 "blocks": 18432, 00:21:03.206 "percent": 14 00:21:03.206 } 00:21:03.206 }, 00:21:03.206 "base_bdevs_list": [ 00:21:03.206 { 00:21:03.206 "name": "spare", 00:21:03.206 "uuid": "ac01d91a-cab9-5fd9-bfb4-823539e1c745", 00:21:03.206 "is_configured": true, 00:21:03.206 "data_offset": 2048, 00:21:03.206 "data_size": 63488 00:21:03.206 }, 00:21:03.206 { 00:21:03.206 "name": "BaseBdev2", 00:21:03.206 "uuid": "ed7b23fe-d1db-5dab-8364-76c40fc1b5af", 00:21:03.206 "is_configured": true, 00:21:03.206 "data_offset": 2048, 00:21:03.206 "data_size": 63488 00:21:03.206 }, 00:21:03.206 { 00:21:03.206 "name": "BaseBdev3", 00:21:03.206 "uuid": "285f89d8-0f89-5d53-aef8-77f533be6f83", 00:21:03.206 "is_configured": true, 00:21:03.206 "data_offset": 2048, 00:21:03.206 "data_size": 63488 00:21:03.206 } 00:21:03.206 ] 00:21:03.206 }' 00:21:03.206 12:19:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:03.206 12:19:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:03.206 12:19:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:03.206 12:19:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:03.206 12:19:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:03.206 12:19:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.206 12:19:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:03.206 [2024-11-25 12:19:59.249022] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:03.464 [2024-11-25 12:19:59.302212] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:03.464 [2024-11-25 12:19:59.302321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:03.464 [2024-11-25 12:19:59.302363] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:03.464 [2024-11-25 12:19:59.302382] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:03.464 12:19:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.464 12:19:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:03.464 12:19:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:03.464 12:19:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:03.464 12:19:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:03.464 12:19:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:03.464 12:19:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:03.464 12:19:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:03.464 12:19:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:03.464 12:19:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:03.464 12:19:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:03.464 12:19:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:03.464 12:19:59 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.464 12:19:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:03.464 12:19:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.464 12:19:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.464 12:19:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:03.464 "name": "raid_bdev1", 00:21:03.464 "uuid": "90f360ba-8166-46ac-b16f-a3c40a621d48", 00:21:03.464 "strip_size_kb": 64, 00:21:03.464 "state": "online", 00:21:03.464 "raid_level": "raid5f", 00:21:03.464 "superblock": true, 00:21:03.464 "num_base_bdevs": 3, 00:21:03.464 "num_base_bdevs_discovered": 2, 00:21:03.464 "num_base_bdevs_operational": 2, 00:21:03.464 "base_bdevs_list": [ 00:21:03.464 { 00:21:03.464 "name": null, 00:21:03.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.464 "is_configured": false, 00:21:03.464 "data_offset": 0, 00:21:03.464 "data_size": 63488 00:21:03.464 }, 00:21:03.464 { 00:21:03.464 "name": "BaseBdev2", 00:21:03.464 "uuid": "ed7b23fe-d1db-5dab-8364-76c40fc1b5af", 00:21:03.464 "is_configured": true, 00:21:03.464 "data_offset": 2048, 00:21:03.464 "data_size": 63488 00:21:03.464 }, 00:21:03.464 { 00:21:03.464 "name": "BaseBdev3", 00:21:03.464 "uuid": "285f89d8-0f89-5d53-aef8-77f533be6f83", 00:21:03.464 "is_configured": true, 00:21:03.464 "data_offset": 2048, 00:21:03.464 "data_size": 63488 00:21:03.464 } 00:21:03.464 ] 00:21:03.465 }' 00:21:03.465 12:19:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:03.465 12:19:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:03.724 12:19:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:03.724 12:19:59 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.724 12:19:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:03.724 [2024-11-25 12:19:59.806071] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:03.724 [2024-11-25 12:19:59.806166] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:03.724 [2024-11-25 12:19:59.806204] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:21:03.724 [2024-11-25 12:19:59.806225] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:03.724 [2024-11-25 12:19:59.806863] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:03.724 [2024-11-25 12:19:59.806914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:03.724 [2024-11-25 12:19:59.807044] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:03.724 [2024-11-25 12:19:59.807070] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:03.724 [2024-11-25 12:19:59.807084] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:03.724 [2024-11-25 12:19:59.807129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:03.982 [2024-11-25 12:19:59.821801] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:21:03.982 spare 00:21:03.982 12:19:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.982 12:19:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:03.982 [2024-11-25 12:19:59.829095] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:04.918 12:20:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:04.918 12:20:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:04.918 12:20:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:04.918 12:20:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:04.918 12:20:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:04.918 12:20:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.918 12:20:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.918 12:20:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.918 12:20:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:04.918 12:20:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.918 12:20:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:04.918 "name": "raid_bdev1", 00:21:04.918 "uuid": "90f360ba-8166-46ac-b16f-a3c40a621d48", 00:21:04.918 "strip_size_kb": 64, 00:21:04.918 "state": 
"online", 00:21:04.918 "raid_level": "raid5f", 00:21:04.918 "superblock": true, 00:21:04.918 "num_base_bdevs": 3, 00:21:04.918 "num_base_bdevs_discovered": 3, 00:21:04.918 "num_base_bdevs_operational": 3, 00:21:04.918 "process": { 00:21:04.919 "type": "rebuild", 00:21:04.919 "target": "spare", 00:21:04.919 "progress": { 00:21:04.919 "blocks": 18432, 00:21:04.919 "percent": 14 00:21:04.919 } 00:21:04.919 }, 00:21:04.919 "base_bdevs_list": [ 00:21:04.919 { 00:21:04.919 "name": "spare", 00:21:04.919 "uuid": "ac01d91a-cab9-5fd9-bfb4-823539e1c745", 00:21:04.919 "is_configured": true, 00:21:04.919 "data_offset": 2048, 00:21:04.919 "data_size": 63488 00:21:04.919 }, 00:21:04.919 { 00:21:04.919 "name": "BaseBdev2", 00:21:04.919 "uuid": "ed7b23fe-d1db-5dab-8364-76c40fc1b5af", 00:21:04.919 "is_configured": true, 00:21:04.919 "data_offset": 2048, 00:21:04.919 "data_size": 63488 00:21:04.919 }, 00:21:04.919 { 00:21:04.919 "name": "BaseBdev3", 00:21:04.919 "uuid": "285f89d8-0f89-5d53-aef8-77f533be6f83", 00:21:04.919 "is_configured": true, 00:21:04.919 "data_offset": 2048, 00:21:04.919 "data_size": 63488 00:21:04.919 } 00:21:04.919 ] 00:21:04.919 }' 00:21:04.919 12:20:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:04.919 12:20:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:04.919 12:20:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:04.919 12:20:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:04.919 12:20:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:04.919 12:20:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.919 12:20:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:04.919 [2024-11-25 12:20:01.003464] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:05.177 [2024-11-25 12:20:01.045226] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:05.177 [2024-11-25 12:20:01.045367] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:05.177 [2024-11-25 12:20:01.045401] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:05.177 [2024-11-25 12:20:01.045413] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:05.177 12:20:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.177 12:20:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:05.177 12:20:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:05.177 12:20:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:05.177 12:20:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:05.177 12:20:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:05.177 12:20:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:05.177 12:20:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:05.177 12:20:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:05.177 12:20:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:05.177 12:20:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:05.177 12:20:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.177 12:20:01 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.177 12:20:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.177 12:20:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.177 12:20:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.177 12:20:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:05.177 "name": "raid_bdev1", 00:21:05.177 "uuid": "90f360ba-8166-46ac-b16f-a3c40a621d48", 00:21:05.177 "strip_size_kb": 64, 00:21:05.177 "state": "online", 00:21:05.177 "raid_level": "raid5f", 00:21:05.177 "superblock": true, 00:21:05.177 "num_base_bdevs": 3, 00:21:05.177 "num_base_bdevs_discovered": 2, 00:21:05.177 "num_base_bdevs_operational": 2, 00:21:05.177 "base_bdevs_list": [ 00:21:05.177 { 00:21:05.177 "name": null, 00:21:05.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.177 "is_configured": false, 00:21:05.177 "data_offset": 0, 00:21:05.177 "data_size": 63488 00:21:05.177 }, 00:21:05.177 { 00:21:05.177 "name": "BaseBdev2", 00:21:05.177 "uuid": "ed7b23fe-d1db-5dab-8364-76c40fc1b5af", 00:21:05.177 "is_configured": true, 00:21:05.177 "data_offset": 2048, 00:21:05.177 "data_size": 63488 00:21:05.177 }, 00:21:05.177 { 00:21:05.177 "name": "BaseBdev3", 00:21:05.177 "uuid": "285f89d8-0f89-5d53-aef8-77f533be6f83", 00:21:05.177 "is_configured": true, 00:21:05.177 "data_offset": 2048, 00:21:05.177 "data_size": 63488 00:21:05.177 } 00:21:05.177 ] 00:21:05.177 }' 00:21:05.177 12:20:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:05.177 12:20:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.741 12:20:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:05.742 12:20:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:21:05.742 12:20:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:05.742 12:20:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:05.742 12:20:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:05.742 12:20:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.742 12:20:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.742 12:20:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.742 12:20:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.742 12:20:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.742 12:20:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:05.742 "name": "raid_bdev1", 00:21:05.742 "uuid": "90f360ba-8166-46ac-b16f-a3c40a621d48", 00:21:05.742 "strip_size_kb": 64, 00:21:05.742 "state": "online", 00:21:05.742 "raid_level": "raid5f", 00:21:05.742 "superblock": true, 00:21:05.742 "num_base_bdevs": 3, 00:21:05.742 "num_base_bdevs_discovered": 2, 00:21:05.742 "num_base_bdevs_operational": 2, 00:21:05.742 "base_bdevs_list": [ 00:21:05.742 { 00:21:05.742 "name": null, 00:21:05.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.742 "is_configured": false, 00:21:05.742 "data_offset": 0, 00:21:05.742 "data_size": 63488 00:21:05.742 }, 00:21:05.742 { 00:21:05.742 "name": "BaseBdev2", 00:21:05.742 "uuid": "ed7b23fe-d1db-5dab-8364-76c40fc1b5af", 00:21:05.742 "is_configured": true, 00:21:05.742 "data_offset": 2048, 00:21:05.742 "data_size": 63488 00:21:05.742 }, 00:21:05.742 { 00:21:05.742 "name": "BaseBdev3", 00:21:05.742 "uuid": "285f89d8-0f89-5d53-aef8-77f533be6f83", 00:21:05.742 "is_configured": true, 
00:21:05.742 "data_offset": 2048, 00:21:05.742 "data_size": 63488 00:21:05.742 } 00:21:05.742 ] 00:21:05.742 }' 00:21:05.742 12:20:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:05.742 12:20:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:05.742 12:20:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:05.742 12:20:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:05.742 12:20:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:05.742 12:20:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.742 12:20:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.742 12:20:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.742 12:20:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:05.742 12:20:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.742 12:20:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.742 [2024-11-25 12:20:01.754109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:05.742 [2024-11-25 12:20:01.754183] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:05.742 [2024-11-25 12:20:01.754219] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:21:05.742 [2024-11-25 12:20:01.754235] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:05.742 [2024-11-25 12:20:01.754879] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:05.742 [2024-11-25 
12:20:01.754924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:05.742 [2024-11-25 12:20:01.755048] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:05.742 [2024-11-25 12:20:01.755075] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:05.742 [2024-11-25 12:20:01.755100] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:05.742 [2024-11-25 12:20:01.755114] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:05.742 BaseBdev1 00:21:05.742 12:20:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.742 12:20:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:06.733 12:20:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:06.733 12:20:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:06.733 12:20:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:06.733 12:20:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:06.733 12:20:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:06.733 12:20:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:06.733 12:20:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:06.733 12:20:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:06.733 12:20:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:06.733 12:20:02 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:06.733 12:20:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.733 12:20:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.733 12:20:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.733 12:20:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.733 12:20:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.993 12:20:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:06.993 "name": "raid_bdev1", 00:21:06.993 "uuid": "90f360ba-8166-46ac-b16f-a3c40a621d48", 00:21:06.993 "strip_size_kb": 64, 00:21:06.993 "state": "online", 00:21:06.993 "raid_level": "raid5f", 00:21:06.993 "superblock": true, 00:21:06.993 "num_base_bdevs": 3, 00:21:06.993 "num_base_bdevs_discovered": 2, 00:21:06.993 "num_base_bdevs_operational": 2, 00:21:06.993 "base_bdevs_list": [ 00:21:06.993 { 00:21:06.993 "name": null, 00:21:06.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.993 "is_configured": false, 00:21:06.993 "data_offset": 0, 00:21:06.993 "data_size": 63488 00:21:06.993 }, 00:21:06.993 { 00:21:06.993 "name": "BaseBdev2", 00:21:06.993 "uuid": "ed7b23fe-d1db-5dab-8364-76c40fc1b5af", 00:21:06.993 "is_configured": true, 00:21:06.993 "data_offset": 2048, 00:21:06.993 "data_size": 63488 00:21:06.993 }, 00:21:06.993 { 00:21:06.993 "name": "BaseBdev3", 00:21:06.993 "uuid": "285f89d8-0f89-5d53-aef8-77f533be6f83", 00:21:06.993 "is_configured": true, 00:21:06.993 "data_offset": 2048, 00:21:06.993 "data_size": 63488 00:21:06.993 } 00:21:06.993 ] 00:21:06.993 }' 00:21:06.993 12:20:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:06.993 12:20:02 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:21:07.252 12:20:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:07.252 12:20:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:07.252 12:20:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:07.252 12:20:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:07.252 12:20:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:07.252 12:20:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:07.252 12:20:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.252 12:20:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.252 12:20:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:07.252 12:20:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.512 12:20:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:07.512 "name": "raid_bdev1", 00:21:07.512 "uuid": "90f360ba-8166-46ac-b16f-a3c40a621d48", 00:21:07.512 "strip_size_kb": 64, 00:21:07.512 "state": "online", 00:21:07.512 "raid_level": "raid5f", 00:21:07.512 "superblock": true, 00:21:07.512 "num_base_bdevs": 3, 00:21:07.512 "num_base_bdevs_discovered": 2, 00:21:07.512 "num_base_bdevs_operational": 2, 00:21:07.512 "base_bdevs_list": [ 00:21:07.512 { 00:21:07.512 "name": null, 00:21:07.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.512 "is_configured": false, 00:21:07.512 "data_offset": 0, 00:21:07.512 "data_size": 63488 00:21:07.512 }, 00:21:07.512 { 00:21:07.512 "name": "BaseBdev2", 00:21:07.512 "uuid": "ed7b23fe-d1db-5dab-8364-76c40fc1b5af", 
00:21:07.512 "is_configured": true, 00:21:07.512 "data_offset": 2048, 00:21:07.512 "data_size": 63488 00:21:07.512 }, 00:21:07.512 { 00:21:07.512 "name": "BaseBdev3", 00:21:07.512 "uuid": "285f89d8-0f89-5d53-aef8-77f533be6f83", 00:21:07.512 "is_configured": true, 00:21:07.512 "data_offset": 2048, 00:21:07.512 "data_size": 63488 00:21:07.512 } 00:21:07.512 ] 00:21:07.512 }' 00:21:07.512 12:20:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:07.512 12:20:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:07.512 12:20:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:07.512 12:20:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:07.512 12:20:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:07.512 12:20:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:21:07.512 12:20:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:07.512 12:20:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:07.512 12:20:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:07.512 12:20:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:07.512 12:20:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:07.512 12:20:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:07.512 12:20:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.512 12:20:03 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:07.512 [2024-11-25 12:20:03.466747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:07.512 [2024-11-25 12:20:03.467008] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:07.512 [2024-11-25 12:20:03.467049] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:07.512 request: 00:21:07.512 { 00:21:07.512 "base_bdev": "BaseBdev1", 00:21:07.512 "raid_bdev": "raid_bdev1", 00:21:07.512 "method": "bdev_raid_add_base_bdev", 00:21:07.512 "req_id": 1 00:21:07.512 } 00:21:07.512 Got JSON-RPC error response 00:21:07.512 response: 00:21:07.512 { 00:21:07.512 "code": -22, 00:21:07.512 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:07.512 } 00:21:07.512 12:20:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:07.512 12:20:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:21:07.512 12:20:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:07.512 12:20:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:07.512 12:20:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:07.512 12:20:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:08.449 12:20:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:08.449 12:20:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:08.449 12:20:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:08.449 12:20:04 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:08.449 12:20:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:08.449 12:20:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:08.449 12:20:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:08.449 12:20:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:08.449 12:20:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:08.449 12:20:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:08.449 12:20:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.449 12:20:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.449 12:20:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.449 12:20:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.449 12:20:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.450 12:20:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:08.450 "name": "raid_bdev1", 00:21:08.450 "uuid": "90f360ba-8166-46ac-b16f-a3c40a621d48", 00:21:08.450 "strip_size_kb": 64, 00:21:08.450 "state": "online", 00:21:08.450 "raid_level": "raid5f", 00:21:08.450 "superblock": true, 00:21:08.450 "num_base_bdevs": 3, 00:21:08.450 "num_base_bdevs_discovered": 2, 00:21:08.450 "num_base_bdevs_operational": 2, 00:21:08.450 "base_bdevs_list": [ 00:21:08.450 { 00:21:08.450 "name": null, 00:21:08.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.450 "is_configured": false, 00:21:08.450 "data_offset": 0, 00:21:08.450 "data_size": 63488 00:21:08.450 }, 00:21:08.450 { 00:21:08.450 
"name": "BaseBdev2", 00:21:08.450 "uuid": "ed7b23fe-d1db-5dab-8364-76c40fc1b5af", 00:21:08.450 "is_configured": true, 00:21:08.450 "data_offset": 2048, 00:21:08.450 "data_size": 63488 00:21:08.450 }, 00:21:08.450 { 00:21:08.450 "name": "BaseBdev3", 00:21:08.450 "uuid": "285f89d8-0f89-5d53-aef8-77f533be6f83", 00:21:08.450 "is_configured": true, 00:21:08.450 "data_offset": 2048, 00:21:08.450 "data_size": 63488 00:21:08.450 } 00:21:08.450 ] 00:21:08.450 }' 00:21:08.450 12:20:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:08.450 12:20:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.018 12:20:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:09.018 12:20:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:09.018 12:20:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:09.018 12:20:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:09.018 12:20:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:09.018 12:20:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:09.018 12:20:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.018 12:20:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.018 12:20:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.018 12:20:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.018 12:20:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:09.018 "name": "raid_bdev1", 00:21:09.018 "uuid": "90f360ba-8166-46ac-b16f-a3c40a621d48", 00:21:09.018 
"strip_size_kb": 64, 00:21:09.018 "state": "online", 00:21:09.018 "raid_level": "raid5f", 00:21:09.018 "superblock": true, 00:21:09.018 "num_base_bdevs": 3, 00:21:09.018 "num_base_bdevs_discovered": 2, 00:21:09.018 "num_base_bdevs_operational": 2, 00:21:09.018 "base_bdevs_list": [ 00:21:09.018 { 00:21:09.018 "name": null, 00:21:09.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.018 "is_configured": false, 00:21:09.018 "data_offset": 0, 00:21:09.018 "data_size": 63488 00:21:09.018 }, 00:21:09.018 { 00:21:09.018 "name": "BaseBdev2", 00:21:09.018 "uuid": "ed7b23fe-d1db-5dab-8364-76c40fc1b5af", 00:21:09.018 "is_configured": true, 00:21:09.018 "data_offset": 2048, 00:21:09.018 "data_size": 63488 00:21:09.018 }, 00:21:09.018 { 00:21:09.018 "name": "BaseBdev3", 00:21:09.018 "uuid": "285f89d8-0f89-5d53-aef8-77f533be6f83", 00:21:09.018 "is_configured": true, 00:21:09.018 "data_offset": 2048, 00:21:09.018 "data_size": 63488 00:21:09.018 } 00:21:09.018 ] 00:21:09.018 }' 00:21:09.018 12:20:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:09.278 12:20:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:09.278 12:20:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:09.278 12:20:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:09.278 12:20:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82391 00:21:09.278 12:20:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82391 ']' 00:21:09.278 12:20:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82391 00:21:09.278 12:20:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:21:09.278 12:20:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:09.278 12:20:05 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82391 00:21:09.278 12:20:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:09.278 12:20:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:09.278 killing process with pid 82391 00:21:09.278 12:20:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82391' 00:21:09.278 Received shutdown signal, test time was about 60.000000 seconds 00:21:09.278 00:21:09.278 Latency(us) 00:21:09.278 [2024-11-25T12:20:05.369Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:09.278 [2024-11-25T12:20:05.369Z] =================================================================================================================== 00:21:09.278 [2024-11-25T12:20:05.369Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:09.278 12:20:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82391 00:21:09.278 [2024-11-25 12:20:05.189555] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:09.278 [2024-11-25 12:20:05.189711] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:09.278 [2024-11-25 12:20:05.189809] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:09.278 [2024-11-25 12:20:05.189837] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:09.278 12:20:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82391 00:21:09.537 [2024-11-25 12:20:05.549701] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:11.035 12:20:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:21:11.035 00:21:11.035 real 0m24.517s 00:21:11.035 user 0m32.483s 
00:21:11.035 sys 0m2.552s 00:21:11.035 12:20:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:11.035 12:20:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.035 ************************************ 00:21:11.035 END TEST raid5f_rebuild_test_sb 00:21:11.035 ************************************ 00:21:11.035 12:20:06 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:21:11.035 12:20:06 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:21:11.035 12:20:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:11.035 12:20:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:11.035 12:20:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:11.035 ************************************ 00:21:11.035 START TEST raid5f_state_function_test 00:21:11.035 ************************************ 00:21:11.035 12:20:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:21:11.035 12:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:21:11.035 12:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:21:11.035 12:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:21:11.035 12:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:11.035 12:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:11.035 12:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:11.035 12:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:11.035 12:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:21:11.035 12:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:11.035 12:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:11.035 12:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:11.035 12:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:11.035 12:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:21:11.035 12:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:11.035 12:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:11.035 12:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:21:11.035 12:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:11.035 12:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:11.035 12:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:11.035 12:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:11.035 12:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:11.035 12:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:11.035 12:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:11.035 12:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:11.035 12:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:21:11.035 12:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:21:11.035 12:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:21:11.035 12:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:21:11.035 12:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:21:11.035 12:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83152 00:21:11.035 12:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:11.035 12:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83152' 00:21:11.035 Process raid pid: 83152 00:21:11.035 12:20:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83152 00:21:11.035 12:20:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 83152 ']' 00:21:11.035 12:20:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.035 12:20:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:11.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:11.035 12:20:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:11.036 12:20:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:11.036 12:20:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.036 [2024-11-25 12:20:06.809142] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:21:11.036 [2024-11-25 12:20:06.809360] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:11.036 [2024-11-25 12:20:07.000416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.314 [2024-11-25 12:20:07.133624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.314 [2024-11-25 12:20:07.340654] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:11.314 [2024-11-25 12:20:07.340724] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:11.882 12:20:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:11.882 12:20:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:21:11.883 12:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:11.883 12:20:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.883 12:20:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.883 [2024-11-25 12:20:07.799906] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:11.883 [2024-11-25 12:20:07.800017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:11.883 [2024-11-25 12:20:07.800034] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:11.883 [2024-11-25 12:20:07.800051] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:11.883 [2024-11-25 12:20:07.800061] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:21:11.883 [2024-11-25 12:20:07.800076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:11.883 [2024-11-25 12:20:07.800085] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:11.883 [2024-11-25 12:20:07.800099] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:11.883 12:20:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.883 12:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:11.883 12:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:11.883 12:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:11.883 12:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:11.883 12:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:11.883 12:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:11.883 12:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:11.883 12:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:11.883 12:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:11.883 12:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:11.883 12:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.883 12:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:11.883 12:20:07 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.883 12:20:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.883 12:20:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.883 12:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:11.883 "name": "Existed_Raid", 00:21:11.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.883 "strip_size_kb": 64, 00:21:11.883 "state": "configuring", 00:21:11.883 "raid_level": "raid5f", 00:21:11.883 "superblock": false, 00:21:11.883 "num_base_bdevs": 4, 00:21:11.883 "num_base_bdevs_discovered": 0, 00:21:11.883 "num_base_bdevs_operational": 4, 00:21:11.883 "base_bdevs_list": [ 00:21:11.883 { 00:21:11.883 "name": "BaseBdev1", 00:21:11.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.883 "is_configured": false, 00:21:11.883 "data_offset": 0, 00:21:11.883 "data_size": 0 00:21:11.883 }, 00:21:11.883 { 00:21:11.883 "name": "BaseBdev2", 00:21:11.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.883 "is_configured": false, 00:21:11.883 "data_offset": 0, 00:21:11.883 "data_size": 0 00:21:11.883 }, 00:21:11.883 { 00:21:11.883 "name": "BaseBdev3", 00:21:11.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.883 "is_configured": false, 00:21:11.883 "data_offset": 0, 00:21:11.883 "data_size": 0 00:21:11.883 }, 00:21:11.883 { 00:21:11.883 "name": "BaseBdev4", 00:21:11.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.883 "is_configured": false, 00:21:11.883 "data_offset": 0, 00:21:11.883 "data_size": 0 00:21:11.883 } 00:21:11.883 ] 00:21:11.883 }' 00:21:11.883 12:20:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:11.883 12:20:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.452 12:20:08 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:12.452 12:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.452 12:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.452 [2024-11-25 12:20:08.320213] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:12.452 [2024-11-25 12:20:08.320317] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:12.452 12:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.452 12:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:12.452 12:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.452 12:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.452 [2024-11-25 12:20:08.332185] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:12.452 [2024-11-25 12:20:08.332270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:12.452 [2024-11-25 12:20:08.332302] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:12.452 [2024-11-25 12:20:08.332318] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:12.452 [2024-11-25 12:20:08.332328] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:12.452 [2024-11-25 12:20:08.332342] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:12.452 [2024-11-25 12:20:08.332352] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:21:12.453 [2024-11-25 12:20:08.332384] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:12.453 12:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.453 12:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:12.453 12:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.453 12:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.453 [2024-11-25 12:20:08.378926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:12.453 BaseBdev1 00:21:12.453 12:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.453 12:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:12.453 12:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:12.453 12:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:12.453 12:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:12.453 12:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:12.453 12:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:12.453 12:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:12.453 12:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.453 12:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.453 12:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.453 
12:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:12.453 12:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.453 12:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.453 [ 00:21:12.453 { 00:21:12.453 "name": "BaseBdev1", 00:21:12.453 "aliases": [ 00:21:12.453 "1a9e49e8-87c5-4b80-9744-f57c3ae70610" 00:21:12.453 ], 00:21:12.453 "product_name": "Malloc disk", 00:21:12.453 "block_size": 512, 00:21:12.453 "num_blocks": 65536, 00:21:12.453 "uuid": "1a9e49e8-87c5-4b80-9744-f57c3ae70610", 00:21:12.453 "assigned_rate_limits": { 00:21:12.453 "rw_ios_per_sec": 0, 00:21:12.453 "rw_mbytes_per_sec": 0, 00:21:12.453 "r_mbytes_per_sec": 0, 00:21:12.453 "w_mbytes_per_sec": 0 00:21:12.453 }, 00:21:12.453 "claimed": true, 00:21:12.453 "claim_type": "exclusive_write", 00:21:12.453 "zoned": false, 00:21:12.453 "supported_io_types": { 00:21:12.453 "read": true, 00:21:12.453 "write": true, 00:21:12.453 "unmap": true, 00:21:12.453 "flush": true, 00:21:12.453 "reset": true, 00:21:12.453 "nvme_admin": false, 00:21:12.453 "nvme_io": false, 00:21:12.453 "nvme_io_md": false, 00:21:12.453 "write_zeroes": true, 00:21:12.453 "zcopy": true, 00:21:12.453 "get_zone_info": false, 00:21:12.453 "zone_management": false, 00:21:12.453 "zone_append": false, 00:21:12.453 "compare": false, 00:21:12.453 "compare_and_write": false, 00:21:12.453 "abort": true, 00:21:12.453 "seek_hole": false, 00:21:12.453 "seek_data": false, 00:21:12.453 "copy": true, 00:21:12.453 "nvme_iov_md": false 00:21:12.453 }, 00:21:12.453 "memory_domains": [ 00:21:12.453 { 00:21:12.453 "dma_device_id": "system", 00:21:12.453 "dma_device_type": 1 00:21:12.453 }, 00:21:12.453 { 00:21:12.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:12.453 "dma_device_type": 2 00:21:12.453 } 00:21:12.453 ], 00:21:12.453 "driver_specific": {} 00:21:12.453 } 
00:21:12.453 ] 00:21:12.453 12:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.453 12:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:12.453 12:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:12.453 12:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:12.453 12:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:12.453 12:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:12.453 12:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:12.453 12:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:12.453 12:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:12.453 12:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:12.453 12:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:12.453 12:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:12.453 12:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:12.453 12:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:12.453 12:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.453 12:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.453 12:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:12.453 12:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:12.453 "name": "Existed_Raid", 00:21:12.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.453 "strip_size_kb": 64, 00:21:12.453 "state": "configuring", 00:21:12.453 "raid_level": "raid5f", 00:21:12.453 "superblock": false, 00:21:12.453 "num_base_bdevs": 4, 00:21:12.453 "num_base_bdevs_discovered": 1, 00:21:12.453 "num_base_bdevs_operational": 4, 00:21:12.453 "base_bdevs_list": [ 00:21:12.453 { 00:21:12.453 "name": "BaseBdev1", 00:21:12.453 "uuid": "1a9e49e8-87c5-4b80-9744-f57c3ae70610", 00:21:12.453 "is_configured": true, 00:21:12.453 "data_offset": 0, 00:21:12.453 "data_size": 65536 00:21:12.453 }, 00:21:12.453 { 00:21:12.453 "name": "BaseBdev2", 00:21:12.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.453 "is_configured": false, 00:21:12.453 "data_offset": 0, 00:21:12.453 "data_size": 0 00:21:12.453 }, 00:21:12.453 { 00:21:12.453 "name": "BaseBdev3", 00:21:12.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.453 "is_configured": false, 00:21:12.453 "data_offset": 0, 00:21:12.453 "data_size": 0 00:21:12.453 }, 00:21:12.453 { 00:21:12.453 "name": "BaseBdev4", 00:21:12.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.453 "is_configured": false, 00:21:12.453 "data_offset": 0, 00:21:12.453 "data_size": 0 00:21:12.453 } 00:21:12.453 ] 00:21:12.453 }' 00:21:12.453 12:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:12.453 12:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.021 12:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:13.021 12:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.021 12:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.021 
[2024-11-25 12:20:08.899154] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:13.021 [2024-11-25 12:20:08.899253] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:13.021 12:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.021 12:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:13.021 12:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.021 12:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.021 [2024-11-25 12:20:08.907196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:13.021 [2024-11-25 12:20:08.909856] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:13.021 [2024-11-25 12:20:08.910038] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:13.021 [2024-11-25 12:20:08.910171] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:13.021 [2024-11-25 12:20:08.910235] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:13.021 [2024-11-25 12:20:08.910439] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:13.021 [2024-11-25 12:20:08.910517] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:13.021 12:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.021 12:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:13.021 12:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:21:13.021 12:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:13.021 12:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:13.021 12:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:13.021 12:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:13.021 12:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:13.021 12:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:13.021 12:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:13.021 12:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:13.021 12:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:13.021 12:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:13.021 12:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.021 12:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:13.021 12:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.021 12:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.021 12:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.021 12:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:13.021 "name": "Existed_Raid", 00:21:13.021 "uuid": "00000000-0000-0000-0000-000000000000", 
00:21:13.021 "strip_size_kb": 64, 00:21:13.021 "state": "configuring", 00:21:13.021 "raid_level": "raid5f", 00:21:13.021 "superblock": false, 00:21:13.021 "num_base_bdevs": 4, 00:21:13.021 "num_base_bdevs_discovered": 1, 00:21:13.021 "num_base_bdevs_operational": 4, 00:21:13.021 "base_bdevs_list": [ 00:21:13.021 { 00:21:13.021 "name": "BaseBdev1", 00:21:13.021 "uuid": "1a9e49e8-87c5-4b80-9744-f57c3ae70610", 00:21:13.021 "is_configured": true, 00:21:13.021 "data_offset": 0, 00:21:13.021 "data_size": 65536 00:21:13.021 }, 00:21:13.021 { 00:21:13.021 "name": "BaseBdev2", 00:21:13.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.021 "is_configured": false, 00:21:13.021 "data_offset": 0, 00:21:13.021 "data_size": 0 00:21:13.021 }, 00:21:13.021 { 00:21:13.021 "name": "BaseBdev3", 00:21:13.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.021 "is_configured": false, 00:21:13.021 "data_offset": 0, 00:21:13.021 "data_size": 0 00:21:13.021 }, 00:21:13.021 { 00:21:13.021 "name": "BaseBdev4", 00:21:13.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.021 "is_configured": false, 00:21:13.021 "data_offset": 0, 00:21:13.021 "data_size": 0 00:21:13.021 } 00:21:13.021 ] 00:21:13.021 }' 00:21:13.021 12:20:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:13.021 12:20:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.593 12:20:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:13.593 12:20:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.593 12:20:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.593 [2024-11-25 12:20:09.450550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:13.593 BaseBdev2 00:21:13.593 12:20:09 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.593 12:20:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:13.593 12:20:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:13.593 12:20:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:13.593 12:20:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:13.593 12:20:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:13.593 12:20:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:13.593 12:20:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:13.593 12:20:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.593 12:20:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.593 12:20:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.593 12:20:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:13.593 12:20:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.593 12:20:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.593 [ 00:21:13.593 { 00:21:13.593 "name": "BaseBdev2", 00:21:13.593 "aliases": [ 00:21:13.593 "a6cf95ba-899e-4372-8ebd-4a033cdc2a7a" 00:21:13.593 ], 00:21:13.593 "product_name": "Malloc disk", 00:21:13.593 "block_size": 512, 00:21:13.593 "num_blocks": 65536, 00:21:13.593 "uuid": "a6cf95ba-899e-4372-8ebd-4a033cdc2a7a", 00:21:13.593 "assigned_rate_limits": { 00:21:13.593 "rw_ios_per_sec": 0, 00:21:13.593 "rw_mbytes_per_sec": 0, 00:21:13.593 
"r_mbytes_per_sec": 0, 00:21:13.594 "w_mbytes_per_sec": 0 00:21:13.594 }, 00:21:13.594 "claimed": true, 00:21:13.594 "claim_type": "exclusive_write", 00:21:13.594 "zoned": false, 00:21:13.594 "supported_io_types": { 00:21:13.594 "read": true, 00:21:13.594 "write": true, 00:21:13.594 "unmap": true, 00:21:13.594 "flush": true, 00:21:13.594 "reset": true, 00:21:13.594 "nvme_admin": false, 00:21:13.594 "nvme_io": false, 00:21:13.594 "nvme_io_md": false, 00:21:13.594 "write_zeroes": true, 00:21:13.594 "zcopy": true, 00:21:13.594 "get_zone_info": false, 00:21:13.594 "zone_management": false, 00:21:13.594 "zone_append": false, 00:21:13.594 "compare": false, 00:21:13.594 "compare_and_write": false, 00:21:13.594 "abort": true, 00:21:13.594 "seek_hole": false, 00:21:13.594 "seek_data": false, 00:21:13.594 "copy": true, 00:21:13.594 "nvme_iov_md": false 00:21:13.594 }, 00:21:13.594 "memory_domains": [ 00:21:13.594 { 00:21:13.594 "dma_device_id": "system", 00:21:13.594 "dma_device_type": 1 00:21:13.594 }, 00:21:13.594 { 00:21:13.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:13.594 "dma_device_type": 2 00:21:13.594 } 00:21:13.594 ], 00:21:13.594 "driver_specific": {} 00:21:13.594 } 00:21:13.594 ] 00:21:13.594 12:20:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.594 12:20:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:13.594 12:20:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:13.594 12:20:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:13.594 12:20:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:13.594 12:20:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:13.594 12:20:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:21:13.594 12:20:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:13.594 12:20:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:13.594 12:20:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:13.594 12:20:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:13.594 12:20:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:13.594 12:20:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:13.594 12:20:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:13.594 12:20:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.594 12:20:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.594 12:20:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:13.594 12:20:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.594 12:20:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.594 12:20:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:13.594 "name": "Existed_Raid", 00:21:13.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.594 "strip_size_kb": 64, 00:21:13.594 "state": "configuring", 00:21:13.594 "raid_level": "raid5f", 00:21:13.594 "superblock": false, 00:21:13.594 "num_base_bdevs": 4, 00:21:13.594 "num_base_bdevs_discovered": 2, 00:21:13.594 "num_base_bdevs_operational": 4, 00:21:13.594 "base_bdevs_list": [ 00:21:13.594 { 00:21:13.594 "name": "BaseBdev1", 00:21:13.594 "uuid": 
"1a9e49e8-87c5-4b80-9744-f57c3ae70610", 00:21:13.594 "is_configured": true, 00:21:13.594 "data_offset": 0, 00:21:13.594 "data_size": 65536 00:21:13.594 }, 00:21:13.594 { 00:21:13.594 "name": "BaseBdev2", 00:21:13.594 "uuid": "a6cf95ba-899e-4372-8ebd-4a033cdc2a7a", 00:21:13.594 "is_configured": true, 00:21:13.594 "data_offset": 0, 00:21:13.594 "data_size": 65536 00:21:13.594 }, 00:21:13.594 { 00:21:13.594 "name": "BaseBdev3", 00:21:13.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.594 "is_configured": false, 00:21:13.594 "data_offset": 0, 00:21:13.594 "data_size": 0 00:21:13.594 }, 00:21:13.594 { 00:21:13.594 "name": "BaseBdev4", 00:21:13.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.594 "is_configured": false, 00:21:13.594 "data_offset": 0, 00:21:13.594 "data_size": 0 00:21:13.594 } 00:21:13.594 ] 00:21:13.594 }' 00:21:13.594 12:20:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:13.594 12:20:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.161 12:20:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:14.161 12:20:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.161 12:20:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.161 [2024-11-25 12:20:10.041528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:14.161 BaseBdev3 00:21:14.161 12:20:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.161 12:20:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:21:14.161 12:20:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:21:14.161 12:20:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:21:14.161 12:20:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:14.161 12:20:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:14.161 12:20:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:14.161 12:20:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:14.161 12:20:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.161 12:20:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.161 12:20:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.161 12:20:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:14.161 12:20:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.161 12:20:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.161 [ 00:21:14.161 { 00:21:14.161 "name": "BaseBdev3", 00:21:14.161 "aliases": [ 00:21:14.161 "7621fa67-6362-47ca-8e3e-c8cc461ff720" 00:21:14.161 ], 00:21:14.161 "product_name": "Malloc disk", 00:21:14.161 "block_size": 512, 00:21:14.161 "num_blocks": 65536, 00:21:14.161 "uuid": "7621fa67-6362-47ca-8e3e-c8cc461ff720", 00:21:14.161 "assigned_rate_limits": { 00:21:14.161 "rw_ios_per_sec": 0, 00:21:14.161 "rw_mbytes_per_sec": 0, 00:21:14.161 "r_mbytes_per_sec": 0, 00:21:14.161 "w_mbytes_per_sec": 0 00:21:14.161 }, 00:21:14.161 "claimed": true, 00:21:14.161 "claim_type": "exclusive_write", 00:21:14.161 "zoned": false, 00:21:14.161 "supported_io_types": { 00:21:14.161 "read": true, 00:21:14.161 "write": true, 00:21:14.161 "unmap": true, 00:21:14.161 "flush": true, 00:21:14.161 "reset": true, 00:21:14.161 "nvme_admin": false, 
00:21:14.161 "nvme_io": false, 00:21:14.161 "nvme_io_md": false, 00:21:14.161 "write_zeroes": true, 00:21:14.161 "zcopy": true, 00:21:14.161 "get_zone_info": false, 00:21:14.161 "zone_management": false, 00:21:14.161 "zone_append": false, 00:21:14.161 "compare": false, 00:21:14.161 "compare_and_write": false, 00:21:14.161 "abort": true, 00:21:14.161 "seek_hole": false, 00:21:14.161 "seek_data": false, 00:21:14.161 "copy": true, 00:21:14.161 "nvme_iov_md": false 00:21:14.161 }, 00:21:14.161 "memory_domains": [ 00:21:14.161 { 00:21:14.161 "dma_device_id": "system", 00:21:14.161 "dma_device_type": 1 00:21:14.161 }, 00:21:14.161 { 00:21:14.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:14.161 "dma_device_type": 2 00:21:14.161 } 00:21:14.161 ], 00:21:14.161 "driver_specific": {} 00:21:14.161 } 00:21:14.161 ] 00:21:14.161 12:20:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.161 12:20:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:14.161 12:20:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:14.161 12:20:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:14.161 12:20:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:14.161 12:20:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:14.161 12:20:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:14.161 12:20:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:14.161 12:20:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:14.161 12:20:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:21:14.161 12:20:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:14.161 12:20:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:14.161 12:20:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:14.161 12:20:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:14.161 12:20:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.161 12:20:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:14.161 12:20:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.161 12:20:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.161 12:20:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.161 12:20:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:14.161 "name": "Existed_Raid", 00:21:14.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.161 "strip_size_kb": 64, 00:21:14.161 "state": "configuring", 00:21:14.161 "raid_level": "raid5f", 00:21:14.161 "superblock": false, 00:21:14.161 "num_base_bdevs": 4, 00:21:14.161 "num_base_bdevs_discovered": 3, 00:21:14.161 "num_base_bdevs_operational": 4, 00:21:14.161 "base_bdevs_list": [ 00:21:14.161 { 00:21:14.161 "name": "BaseBdev1", 00:21:14.161 "uuid": "1a9e49e8-87c5-4b80-9744-f57c3ae70610", 00:21:14.161 "is_configured": true, 00:21:14.161 "data_offset": 0, 00:21:14.161 "data_size": 65536 00:21:14.161 }, 00:21:14.161 { 00:21:14.161 "name": "BaseBdev2", 00:21:14.161 "uuid": "a6cf95ba-899e-4372-8ebd-4a033cdc2a7a", 00:21:14.161 "is_configured": true, 00:21:14.161 "data_offset": 0, 00:21:14.161 "data_size": 65536 00:21:14.161 }, 00:21:14.161 { 
00:21:14.161 "name": "BaseBdev3", 00:21:14.161 "uuid": "7621fa67-6362-47ca-8e3e-c8cc461ff720", 00:21:14.161 "is_configured": true, 00:21:14.161 "data_offset": 0, 00:21:14.161 "data_size": 65536 00:21:14.162 }, 00:21:14.162 { 00:21:14.162 "name": "BaseBdev4", 00:21:14.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.162 "is_configured": false, 00:21:14.162 "data_offset": 0, 00:21:14.162 "data_size": 0 00:21:14.162 } 00:21:14.162 ] 00:21:14.162 }' 00:21:14.162 12:20:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:14.162 12:20:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.743 12:20:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:21:14.743 12:20:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.743 12:20:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.743 [2024-11-25 12:20:10.634286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:14.743 [2024-11-25 12:20:10.634413] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:14.743 [2024-11-25 12:20:10.634430] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:21:14.743 [2024-11-25 12:20:10.634821] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:14.743 [2024-11-25 12:20:10.641711] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:14.743 [2024-11-25 12:20:10.641744] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:14.743 [2024-11-25 12:20:10.642086] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:14.743 BaseBdev4 00:21:14.743 12:20:10 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.743 12:20:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:21:14.743 12:20:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:21:14.743 12:20:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:14.743 12:20:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:14.743 12:20:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:14.743 12:20:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:14.743 12:20:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:14.743 12:20:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.743 12:20:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.743 12:20:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.743 12:20:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:14.743 12:20:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.743 12:20:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.743 [ 00:21:14.743 { 00:21:14.743 "name": "BaseBdev4", 00:21:14.743 "aliases": [ 00:21:14.743 "34902f69-19e1-4477-b20f-db54c633e4fd" 00:21:14.743 ], 00:21:14.743 "product_name": "Malloc disk", 00:21:14.743 "block_size": 512, 00:21:14.743 "num_blocks": 65536, 00:21:14.743 "uuid": "34902f69-19e1-4477-b20f-db54c633e4fd", 00:21:14.743 "assigned_rate_limits": { 00:21:14.743 "rw_ios_per_sec": 0, 00:21:14.743 
"rw_mbytes_per_sec": 0, 00:21:14.743 "r_mbytes_per_sec": 0, 00:21:14.743 "w_mbytes_per_sec": 0 00:21:14.743 }, 00:21:14.743 "claimed": true, 00:21:14.743 "claim_type": "exclusive_write", 00:21:14.743 "zoned": false, 00:21:14.743 "supported_io_types": { 00:21:14.743 "read": true, 00:21:14.743 "write": true, 00:21:14.743 "unmap": true, 00:21:14.743 "flush": true, 00:21:14.743 "reset": true, 00:21:14.743 "nvme_admin": false, 00:21:14.743 "nvme_io": false, 00:21:14.743 "nvme_io_md": false, 00:21:14.743 "write_zeroes": true, 00:21:14.743 "zcopy": true, 00:21:14.743 "get_zone_info": false, 00:21:14.743 "zone_management": false, 00:21:14.743 "zone_append": false, 00:21:14.743 "compare": false, 00:21:14.743 "compare_and_write": false, 00:21:14.743 "abort": true, 00:21:14.743 "seek_hole": false, 00:21:14.743 "seek_data": false, 00:21:14.743 "copy": true, 00:21:14.743 "nvme_iov_md": false 00:21:14.743 }, 00:21:14.743 "memory_domains": [ 00:21:14.743 { 00:21:14.743 "dma_device_id": "system", 00:21:14.743 "dma_device_type": 1 00:21:14.743 }, 00:21:14.743 { 00:21:14.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:14.743 "dma_device_type": 2 00:21:14.743 } 00:21:14.743 ], 00:21:14.743 "driver_specific": {} 00:21:14.743 } 00:21:14.743 ] 00:21:14.743 12:20:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.743 12:20:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:14.743 12:20:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:14.743 12:20:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:14.743 12:20:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:21:14.743 12:20:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:14.743 12:20:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:14.743 12:20:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:14.743 12:20:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:14.743 12:20:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:14.743 12:20:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:14.743 12:20:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:14.743 12:20:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:14.743 12:20:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:14.743 12:20:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.743 12:20:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.744 12:20:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:14.744 12:20:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.744 12:20:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.744 12:20:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:14.744 "name": "Existed_Raid", 00:21:14.744 "uuid": "6a7fd8b7-c959-45d6-9929-d39f502c2455", 00:21:14.744 "strip_size_kb": 64, 00:21:14.744 "state": "online", 00:21:14.744 "raid_level": "raid5f", 00:21:14.744 "superblock": false, 00:21:14.744 "num_base_bdevs": 4, 00:21:14.744 "num_base_bdevs_discovered": 4, 00:21:14.744 "num_base_bdevs_operational": 4, 00:21:14.744 "base_bdevs_list": [ 00:21:14.744 { 00:21:14.744 "name": 
"BaseBdev1", 00:21:14.744 "uuid": "1a9e49e8-87c5-4b80-9744-f57c3ae70610", 00:21:14.744 "is_configured": true, 00:21:14.744 "data_offset": 0, 00:21:14.744 "data_size": 65536 00:21:14.744 }, 00:21:14.744 { 00:21:14.744 "name": "BaseBdev2", 00:21:14.744 "uuid": "a6cf95ba-899e-4372-8ebd-4a033cdc2a7a", 00:21:14.744 "is_configured": true, 00:21:14.744 "data_offset": 0, 00:21:14.744 "data_size": 65536 00:21:14.744 }, 00:21:14.744 { 00:21:14.744 "name": "BaseBdev3", 00:21:14.744 "uuid": "7621fa67-6362-47ca-8e3e-c8cc461ff720", 00:21:14.744 "is_configured": true, 00:21:14.744 "data_offset": 0, 00:21:14.744 "data_size": 65536 00:21:14.744 }, 00:21:14.744 { 00:21:14.744 "name": "BaseBdev4", 00:21:14.744 "uuid": "34902f69-19e1-4477-b20f-db54c633e4fd", 00:21:14.744 "is_configured": true, 00:21:14.744 "data_offset": 0, 00:21:14.744 "data_size": 65536 00:21:14.744 } 00:21:14.744 ] 00:21:14.744 }' 00:21:14.744 12:20:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:14.744 12:20:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.311 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:15.311 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:15.311 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:15.311 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:15.311 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:15.311 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:15.311 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:15.311 12:20:11 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.311 12:20:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.311 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:15.311 [2024-11-25 12:20:11.234102] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:15.311 12:20:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.311 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:15.311 "name": "Existed_Raid", 00:21:15.311 "aliases": [ 00:21:15.311 "6a7fd8b7-c959-45d6-9929-d39f502c2455" 00:21:15.311 ], 00:21:15.311 "product_name": "Raid Volume", 00:21:15.311 "block_size": 512, 00:21:15.311 "num_blocks": 196608, 00:21:15.311 "uuid": "6a7fd8b7-c959-45d6-9929-d39f502c2455", 00:21:15.311 "assigned_rate_limits": { 00:21:15.311 "rw_ios_per_sec": 0, 00:21:15.311 "rw_mbytes_per_sec": 0, 00:21:15.311 "r_mbytes_per_sec": 0, 00:21:15.311 "w_mbytes_per_sec": 0 00:21:15.311 }, 00:21:15.311 "claimed": false, 00:21:15.311 "zoned": false, 00:21:15.311 "supported_io_types": { 00:21:15.311 "read": true, 00:21:15.311 "write": true, 00:21:15.311 "unmap": false, 00:21:15.311 "flush": false, 00:21:15.311 "reset": true, 00:21:15.311 "nvme_admin": false, 00:21:15.311 "nvme_io": false, 00:21:15.311 "nvme_io_md": false, 00:21:15.311 "write_zeroes": true, 00:21:15.311 "zcopy": false, 00:21:15.311 "get_zone_info": false, 00:21:15.311 "zone_management": false, 00:21:15.311 "zone_append": false, 00:21:15.311 "compare": false, 00:21:15.311 "compare_and_write": false, 00:21:15.311 "abort": false, 00:21:15.311 "seek_hole": false, 00:21:15.311 "seek_data": false, 00:21:15.311 "copy": false, 00:21:15.311 "nvme_iov_md": false 00:21:15.311 }, 00:21:15.311 "driver_specific": { 00:21:15.311 "raid": { 00:21:15.311 "uuid": "6a7fd8b7-c959-45d6-9929-d39f502c2455", 00:21:15.311 "strip_size_kb": 64, 
00:21:15.311 "state": "online", 00:21:15.311 "raid_level": "raid5f", 00:21:15.311 "superblock": false, 00:21:15.311 "num_base_bdevs": 4, 00:21:15.311 "num_base_bdevs_discovered": 4, 00:21:15.311 "num_base_bdevs_operational": 4, 00:21:15.311 "base_bdevs_list": [ 00:21:15.311 { 00:21:15.311 "name": "BaseBdev1", 00:21:15.311 "uuid": "1a9e49e8-87c5-4b80-9744-f57c3ae70610", 00:21:15.311 "is_configured": true, 00:21:15.311 "data_offset": 0, 00:21:15.311 "data_size": 65536 00:21:15.311 }, 00:21:15.311 { 00:21:15.311 "name": "BaseBdev2", 00:21:15.311 "uuid": "a6cf95ba-899e-4372-8ebd-4a033cdc2a7a", 00:21:15.311 "is_configured": true, 00:21:15.311 "data_offset": 0, 00:21:15.311 "data_size": 65536 00:21:15.311 }, 00:21:15.311 { 00:21:15.311 "name": "BaseBdev3", 00:21:15.311 "uuid": "7621fa67-6362-47ca-8e3e-c8cc461ff720", 00:21:15.311 "is_configured": true, 00:21:15.311 "data_offset": 0, 00:21:15.311 "data_size": 65536 00:21:15.311 }, 00:21:15.311 { 00:21:15.311 "name": "BaseBdev4", 00:21:15.312 "uuid": "34902f69-19e1-4477-b20f-db54c633e4fd", 00:21:15.312 "is_configured": true, 00:21:15.312 "data_offset": 0, 00:21:15.312 "data_size": 65536 00:21:15.312 } 00:21:15.312 ] 00:21:15.312 } 00:21:15.312 } 00:21:15.312 }' 00:21:15.312 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:15.312 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:15.312 BaseBdev2 00:21:15.312 BaseBdev3 00:21:15.312 BaseBdev4' 00:21:15.312 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:15.312 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:15.312 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:15.571 12:20:11 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:15.571 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:15.571 12:20:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.571 12:20:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.571 12:20:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.571 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:15.571 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:15.571 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:15.571 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:15.571 12:20:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.571 12:20:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.571 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:15.571 12:20:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.571 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:15.571 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:15.571 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:15.571 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:21:15.571 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:15.571 12:20:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.571 12:20:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.571 12:20:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.571 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:15.571 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:15.571 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:15.571 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:21:15.571 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:15.571 12:20:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.571 12:20:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.571 12:20:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.571 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:15.571 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:15.571 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:15.571 12:20:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.571 12:20:11 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:21:15.571 [2024-11-25 12:20:11.625994] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:15.830 12:20:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.830 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:15.830 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:21:15.830 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:15.830 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:21:15.830 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:15.830 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:21:15.830 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:15.830 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:15.830 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:15.830 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:15.830 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:15.830 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:15.830 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:15.830 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:15.830 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:15.830 12:20:11 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:15.830 12:20:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.830 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:15.830 12:20:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.830 12:20:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.830 12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:15.830 "name": "Existed_Raid", 00:21:15.830 "uuid": "6a7fd8b7-c959-45d6-9929-d39f502c2455", 00:21:15.830 "strip_size_kb": 64, 00:21:15.830 "state": "online", 00:21:15.830 "raid_level": "raid5f", 00:21:15.830 "superblock": false, 00:21:15.830 "num_base_bdevs": 4, 00:21:15.830 "num_base_bdevs_discovered": 3, 00:21:15.830 "num_base_bdevs_operational": 3, 00:21:15.830 "base_bdevs_list": [ 00:21:15.830 { 00:21:15.830 "name": null, 00:21:15.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.830 "is_configured": false, 00:21:15.830 "data_offset": 0, 00:21:15.830 "data_size": 65536 00:21:15.830 }, 00:21:15.830 { 00:21:15.830 "name": "BaseBdev2", 00:21:15.830 "uuid": "a6cf95ba-899e-4372-8ebd-4a033cdc2a7a", 00:21:15.830 "is_configured": true, 00:21:15.830 "data_offset": 0, 00:21:15.830 "data_size": 65536 00:21:15.830 }, 00:21:15.830 { 00:21:15.830 "name": "BaseBdev3", 00:21:15.830 "uuid": "7621fa67-6362-47ca-8e3e-c8cc461ff720", 00:21:15.830 "is_configured": true, 00:21:15.830 "data_offset": 0, 00:21:15.830 "data_size": 65536 00:21:15.830 }, 00:21:15.830 { 00:21:15.830 "name": "BaseBdev4", 00:21:15.830 "uuid": "34902f69-19e1-4477-b20f-db54c633e4fd", 00:21:15.830 "is_configured": true, 00:21:15.830 "data_offset": 0, 00:21:15.830 "data_size": 65536 00:21:15.830 } 00:21:15.830 ] 00:21:15.830 }' 00:21:15.830 
12:20:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:15.830 12:20:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.398 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:16.398 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:16.398 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.398 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:16.398 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.398 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.398 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.398 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:16.398 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:16.398 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:16.398 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.398 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.398 [2024-11-25 12:20:12.285489] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:16.398 [2024-11-25 12:20:12.285612] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:16.398 [2024-11-25 12:20:12.373579] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:16.398 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:21:16.398 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:16.398 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:16.398 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.398 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:16.398 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.398 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.398 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.398 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:16.398 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:16.398 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:21:16.398 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.398 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.398 [2024-11-25 12:20:12.437651] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:16.657 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.657 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:16.657 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:16.657 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.657 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:21:16.657 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.657 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.657 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.657 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:16.657 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:16.657 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:21:16.657 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.657 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.657 [2024-11-25 12:20:12.587108] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:16.657 [2024-11-25 12:20:12.587319] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:16.657 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.657 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:16.657 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:16.657 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.657 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.657 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.657 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:16.657 12:20:12 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.657 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:16.657 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:16.657 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:21:16.657 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:21:16.657 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:16.657 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:16.657 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.657 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.916 BaseBdev2 00:21:16.916 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.916 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:21:16.916 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:16.916 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:16.916 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:16.916 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:16.916 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:16.916 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:16.916 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:16.916 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.916 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.916 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:16.916 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.916 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.916 [ 00:21:16.916 { 00:21:16.916 "name": "BaseBdev2", 00:21:16.916 "aliases": [ 00:21:16.916 "3b7c0bca-3306-4b23-9dee-740b954a60ee" 00:21:16.916 ], 00:21:16.916 "product_name": "Malloc disk", 00:21:16.916 "block_size": 512, 00:21:16.916 "num_blocks": 65536, 00:21:16.916 "uuid": "3b7c0bca-3306-4b23-9dee-740b954a60ee", 00:21:16.916 "assigned_rate_limits": { 00:21:16.916 "rw_ios_per_sec": 0, 00:21:16.916 "rw_mbytes_per_sec": 0, 00:21:16.916 "r_mbytes_per_sec": 0, 00:21:16.916 "w_mbytes_per_sec": 0 00:21:16.916 }, 00:21:16.916 "claimed": false, 00:21:16.916 "zoned": false, 00:21:16.916 "supported_io_types": { 00:21:16.916 "read": true, 00:21:16.916 "write": true, 00:21:16.916 "unmap": true, 00:21:16.916 "flush": true, 00:21:16.916 "reset": true, 00:21:16.916 "nvme_admin": false, 00:21:16.916 "nvme_io": false, 00:21:16.916 "nvme_io_md": false, 00:21:16.916 "write_zeroes": true, 00:21:16.916 "zcopy": true, 00:21:16.916 "get_zone_info": false, 00:21:16.916 "zone_management": false, 00:21:16.916 "zone_append": false, 00:21:16.916 "compare": false, 00:21:16.916 "compare_and_write": false, 00:21:16.916 "abort": true, 00:21:16.916 "seek_hole": false, 00:21:16.916 "seek_data": false, 00:21:16.916 "copy": true, 00:21:16.916 "nvme_iov_md": false 00:21:16.916 }, 00:21:16.916 "memory_domains": [ 00:21:16.916 { 00:21:16.916 "dma_device_id": "system", 00:21:16.916 "dma_device_type": 1 00:21:16.916 }, 
00:21:16.916 { 00:21:16.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:16.916 "dma_device_type": 2 00:21:16.916 } 00:21:16.917 ], 00:21:16.917 "driver_specific": {} 00:21:16.917 } 00:21:16.917 ] 00:21:16.917 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.917 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:16.917 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:16.917 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:16.917 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:16.917 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.917 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.917 BaseBdev3 00:21:16.917 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.917 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:21:16.917 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:21:16.917 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:16.917 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:16.917 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:16.917 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:16.917 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:16.917 12:20:12 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.917 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.917 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.917 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:16.917 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.917 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.917 [ 00:21:16.917 { 00:21:16.917 "name": "BaseBdev3", 00:21:16.917 "aliases": [ 00:21:16.917 "fa098956-541a-413e-8a32-27e0290e0bca" 00:21:16.917 ], 00:21:16.917 "product_name": "Malloc disk", 00:21:16.917 "block_size": 512, 00:21:16.917 "num_blocks": 65536, 00:21:16.917 "uuid": "fa098956-541a-413e-8a32-27e0290e0bca", 00:21:16.917 "assigned_rate_limits": { 00:21:16.917 "rw_ios_per_sec": 0, 00:21:16.917 "rw_mbytes_per_sec": 0, 00:21:16.917 "r_mbytes_per_sec": 0, 00:21:16.917 "w_mbytes_per_sec": 0 00:21:16.917 }, 00:21:16.917 "claimed": false, 00:21:16.917 "zoned": false, 00:21:16.917 "supported_io_types": { 00:21:16.917 "read": true, 00:21:16.917 "write": true, 00:21:16.917 "unmap": true, 00:21:16.917 "flush": true, 00:21:16.917 "reset": true, 00:21:16.917 "nvme_admin": false, 00:21:16.917 "nvme_io": false, 00:21:16.917 "nvme_io_md": false, 00:21:16.917 "write_zeroes": true, 00:21:16.917 "zcopy": true, 00:21:16.917 "get_zone_info": false, 00:21:16.917 "zone_management": false, 00:21:16.917 "zone_append": false, 00:21:16.917 "compare": false, 00:21:16.917 "compare_and_write": false, 00:21:16.917 "abort": true, 00:21:16.917 "seek_hole": false, 00:21:16.917 "seek_data": false, 00:21:16.917 "copy": true, 00:21:16.917 "nvme_iov_md": false 00:21:16.917 }, 00:21:16.917 "memory_domains": [ 00:21:16.917 { 00:21:16.917 "dma_device_id": "system", 00:21:16.917 
"dma_device_type": 1 00:21:16.917 }, 00:21:16.917 { 00:21:16.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:16.917 "dma_device_type": 2 00:21:16.917 } 00:21:16.917 ], 00:21:16.917 "driver_specific": {} 00:21:16.917 } 00:21:16.917 ] 00:21:16.917 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.917 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:16.917 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:16.917 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:16.917 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:21:16.917 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.917 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.917 BaseBdev4 00:21:16.917 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.917 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:21:16.917 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:21:16.917 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:16.917 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:16.917 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:16.917 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:16.917 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:16.917 12:20:12 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.917 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.917 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.917 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:16.917 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.917 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.917 [ 00:21:16.917 { 00:21:16.917 "name": "BaseBdev4", 00:21:16.917 "aliases": [ 00:21:16.917 "b6d97fef-b436-4828-8406-b8523f29d2e1" 00:21:16.917 ], 00:21:16.917 "product_name": "Malloc disk", 00:21:16.917 "block_size": 512, 00:21:16.917 "num_blocks": 65536, 00:21:16.917 "uuid": "b6d97fef-b436-4828-8406-b8523f29d2e1", 00:21:16.917 "assigned_rate_limits": { 00:21:16.917 "rw_ios_per_sec": 0, 00:21:16.917 "rw_mbytes_per_sec": 0, 00:21:16.917 "r_mbytes_per_sec": 0, 00:21:16.917 "w_mbytes_per_sec": 0 00:21:16.917 }, 00:21:16.917 "claimed": false, 00:21:16.917 "zoned": false, 00:21:16.917 "supported_io_types": { 00:21:16.917 "read": true, 00:21:16.917 "write": true, 00:21:16.917 "unmap": true, 00:21:16.917 "flush": true, 00:21:16.917 "reset": true, 00:21:16.917 "nvme_admin": false, 00:21:16.917 "nvme_io": false, 00:21:16.917 "nvme_io_md": false, 00:21:16.917 "write_zeroes": true, 00:21:16.917 "zcopy": true, 00:21:16.917 "get_zone_info": false, 00:21:16.917 "zone_management": false, 00:21:16.917 "zone_append": false, 00:21:16.917 "compare": false, 00:21:16.917 "compare_and_write": false, 00:21:16.917 "abort": true, 00:21:16.917 "seek_hole": false, 00:21:16.917 "seek_data": false, 00:21:16.917 "copy": true, 00:21:16.917 "nvme_iov_md": false 00:21:16.917 }, 00:21:16.917 "memory_domains": [ 00:21:16.917 { 00:21:16.917 
"dma_device_id": "system", 00:21:16.917 "dma_device_type": 1 00:21:16.917 }, 00:21:16.917 { 00:21:16.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:16.917 "dma_device_type": 2 00:21:16.917 } 00:21:16.918 ], 00:21:16.918 "driver_specific": {} 00:21:16.918 } 00:21:16.918 ] 00:21:16.918 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.918 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:16.918 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:16.918 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:16.918 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:16.918 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.918 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.918 [2024-11-25 12:20:12.992564] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:16.918 [2024-11-25 12:20:12.992629] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:16.918 [2024-11-25 12:20:12.992679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:16.918 [2024-11-25 12:20:12.995248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:16.918 [2024-11-25 12:20:12.995328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:16.918 12:20:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.918 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:21:16.918 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:16.918 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:16.918 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:16.918 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:16.918 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:16.918 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:16.918 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:16.918 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:16.918 12:20:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:16.918 12:20:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:16.918 12:20:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.918 12:20:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.918 12:20:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.176 12:20:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.176 12:20:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:17.176 "name": "Existed_Raid", 00:21:17.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.176 "strip_size_kb": 64, 00:21:17.176 "state": "configuring", 00:21:17.176 "raid_level": "raid5f", 00:21:17.176 "superblock": false, 00:21:17.176 
"num_base_bdevs": 4, 00:21:17.176 "num_base_bdevs_discovered": 3, 00:21:17.176 "num_base_bdevs_operational": 4, 00:21:17.176 "base_bdevs_list": [ 00:21:17.176 { 00:21:17.176 "name": "BaseBdev1", 00:21:17.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.176 "is_configured": false, 00:21:17.176 "data_offset": 0, 00:21:17.176 "data_size": 0 00:21:17.176 }, 00:21:17.176 { 00:21:17.176 "name": "BaseBdev2", 00:21:17.176 "uuid": "3b7c0bca-3306-4b23-9dee-740b954a60ee", 00:21:17.176 "is_configured": true, 00:21:17.176 "data_offset": 0, 00:21:17.176 "data_size": 65536 00:21:17.176 }, 00:21:17.176 { 00:21:17.176 "name": "BaseBdev3", 00:21:17.176 "uuid": "fa098956-541a-413e-8a32-27e0290e0bca", 00:21:17.176 "is_configured": true, 00:21:17.176 "data_offset": 0, 00:21:17.176 "data_size": 65536 00:21:17.176 }, 00:21:17.176 { 00:21:17.176 "name": "BaseBdev4", 00:21:17.176 "uuid": "b6d97fef-b436-4828-8406-b8523f29d2e1", 00:21:17.176 "is_configured": true, 00:21:17.176 "data_offset": 0, 00:21:17.176 "data_size": 65536 00:21:17.176 } 00:21:17.176 ] 00:21:17.176 }' 00:21:17.176 12:20:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:17.176 12:20:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.434 12:20:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:21:17.693 12:20:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.693 12:20:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.693 [2024-11-25 12:20:13.528786] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:17.693 12:20:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.693 12:20:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:21:17.693 12:20:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:17.693 12:20:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:17.693 12:20:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:17.693 12:20:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:17.693 12:20:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:17.693 12:20:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:17.693 12:20:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:17.693 12:20:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:17.693 12:20:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:17.693 12:20:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.693 12:20:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:17.693 12:20:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.693 12:20:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.693 12:20:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.693 12:20:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:17.693 "name": "Existed_Raid", 00:21:17.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.693 "strip_size_kb": 64, 00:21:17.693 "state": "configuring", 00:21:17.693 "raid_level": "raid5f", 00:21:17.693 "superblock": false, 00:21:17.693 "num_base_bdevs": 4, 
00:21:17.693 "num_base_bdevs_discovered": 2, 00:21:17.693 "num_base_bdevs_operational": 4, 00:21:17.693 "base_bdevs_list": [ 00:21:17.693 { 00:21:17.693 "name": "BaseBdev1", 00:21:17.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.693 "is_configured": false, 00:21:17.693 "data_offset": 0, 00:21:17.693 "data_size": 0 00:21:17.693 }, 00:21:17.693 { 00:21:17.693 "name": null, 00:21:17.693 "uuid": "3b7c0bca-3306-4b23-9dee-740b954a60ee", 00:21:17.693 "is_configured": false, 00:21:17.693 "data_offset": 0, 00:21:17.693 "data_size": 65536 00:21:17.693 }, 00:21:17.693 { 00:21:17.693 "name": "BaseBdev3", 00:21:17.693 "uuid": "fa098956-541a-413e-8a32-27e0290e0bca", 00:21:17.693 "is_configured": true, 00:21:17.693 "data_offset": 0, 00:21:17.693 "data_size": 65536 00:21:17.693 }, 00:21:17.693 { 00:21:17.693 "name": "BaseBdev4", 00:21:17.693 "uuid": "b6d97fef-b436-4828-8406-b8523f29d2e1", 00:21:17.693 "is_configured": true, 00:21:17.693 "data_offset": 0, 00:21:17.693 "data_size": 65536 00:21:17.693 } 00:21:17.693 ] 00:21:17.693 }' 00:21:17.693 12:20:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:17.693 12:20:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.262 12:20:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.262 12:20:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.262 12:20:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:18.262 12:20:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.262 12:20:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.262 12:20:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:21:18.262 12:20:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:18.262 12:20:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.262 12:20:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.262 [2024-11-25 12:20:14.181344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:18.262 BaseBdev1 00:21:18.262 12:20:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.262 12:20:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:21:18.262 12:20:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:18.262 12:20:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:18.262 12:20:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:18.262 12:20:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:18.262 12:20:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:18.262 12:20:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:18.262 12:20:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.262 12:20:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.262 12:20:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.262 12:20:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:18.262 12:20:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.262 12:20:14 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.262 [ 00:21:18.262 { 00:21:18.262 "name": "BaseBdev1", 00:21:18.262 "aliases": [ 00:21:18.262 "0cf45eab-22b1-445e-8cc1-c5fb54211dcc" 00:21:18.262 ], 00:21:18.262 "product_name": "Malloc disk", 00:21:18.262 "block_size": 512, 00:21:18.262 "num_blocks": 65536, 00:21:18.262 "uuid": "0cf45eab-22b1-445e-8cc1-c5fb54211dcc", 00:21:18.262 "assigned_rate_limits": { 00:21:18.262 "rw_ios_per_sec": 0, 00:21:18.262 "rw_mbytes_per_sec": 0, 00:21:18.262 "r_mbytes_per_sec": 0, 00:21:18.262 "w_mbytes_per_sec": 0 00:21:18.262 }, 00:21:18.262 "claimed": true, 00:21:18.262 "claim_type": "exclusive_write", 00:21:18.262 "zoned": false, 00:21:18.262 "supported_io_types": { 00:21:18.262 "read": true, 00:21:18.262 "write": true, 00:21:18.262 "unmap": true, 00:21:18.262 "flush": true, 00:21:18.262 "reset": true, 00:21:18.262 "nvme_admin": false, 00:21:18.262 "nvme_io": false, 00:21:18.262 "nvme_io_md": false, 00:21:18.262 "write_zeroes": true, 00:21:18.262 "zcopy": true, 00:21:18.262 "get_zone_info": false, 00:21:18.262 "zone_management": false, 00:21:18.262 "zone_append": false, 00:21:18.262 "compare": false, 00:21:18.262 "compare_and_write": false, 00:21:18.262 "abort": true, 00:21:18.262 "seek_hole": false, 00:21:18.262 "seek_data": false, 00:21:18.262 "copy": true, 00:21:18.262 "nvme_iov_md": false 00:21:18.262 }, 00:21:18.262 "memory_domains": [ 00:21:18.262 { 00:21:18.262 "dma_device_id": "system", 00:21:18.262 "dma_device_type": 1 00:21:18.262 }, 00:21:18.262 { 00:21:18.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:18.262 "dma_device_type": 2 00:21:18.262 } 00:21:18.262 ], 00:21:18.262 "driver_specific": {} 00:21:18.262 } 00:21:18.262 ] 00:21:18.262 12:20:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.262 12:20:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:18.262 12:20:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:18.262 12:20:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:18.262 12:20:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:18.262 12:20:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:18.262 12:20:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:18.262 12:20:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:18.262 12:20:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:18.262 12:20:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:18.262 12:20:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:18.262 12:20:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:18.262 12:20:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.262 12:20:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:18.262 12:20:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.262 12:20:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.262 12:20:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.262 12:20:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:18.262 "name": "Existed_Raid", 00:21:18.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.262 "strip_size_kb": 64, 00:21:18.262 "state": 
"configuring", 00:21:18.262 "raid_level": "raid5f", 00:21:18.262 "superblock": false, 00:21:18.262 "num_base_bdevs": 4, 00:21:18.262 "num_base_bdevs_discovered": 3, 00:21:18.262 "num_base_bdevs_operational": 4, 00:21:18.262 "base_bdevs_list": [ 00:21:18.262 { 00:21:18.262 "name": "BaseBdev1", 00:21:18.262 "uuid": "0cf45eab-22b1-445e-8cc1-c5fb54211dcc", 00:21:18.262 "is_configured": true, 00:21:18.262 "data_offset": 0, 00:21:18.262 "data_size": 65536 00:21:18.262 }, 00:21:18.262 { 00:21:18.262 "name": null, 00:21:18.262 "uuid": "3b7c0bca-3306-4b23-9dee-740b954a60ee", 00:21:18.262 "is_configured": false, 00:21:18.262 "data_offset": 0, 00:21:18.262 "data_size": 65536 00:21:18.262 }, 00:21:18.262 { 00:21:18.262 "name": "BaseBdev3", 00:21:18.262 "uuid": "fa098956-541a-413e-8a32-27e0290e0bca", 00:21:18.262 "is_configured": true, 00:21:18.262 "data_offset": 0, 00:21:18.262 "data_size": 65536 00:21:18.262 }, 00:21:18.262 { 00:21:18.262 "name": "BaseBdev4", 00:21:18.262 "uuid": "b6d97fef-b436-4828-8406-b8523f29d2e1", 00:21:18.262 "is_configured": true, 00:21:18.262 "data_offset": 0, 00:21:18.262 "data_size": 65536 00:21:18.262 } 00:21:18.262 ] 00:21:18.262 }' 00:21:18.262 12:20:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:18.262 12:20:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.829 12:20:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:18.829 12:20:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.829 12:20:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.829 12:20:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.829 12:20:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.829 12:20:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:21:18.829 12:20:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:21:18.829 12:20:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.829 12:20:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.829 [2024-11-25 12:20:14.805706] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:18.830 12:20:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.830 12:20:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:18.830 12:20:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:18.830 12:20:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:18.830 12:20:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:18.830 12:20:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:18.830 12:20:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:18.830 12:20:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:18.830 12:20:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:18.830 12:20:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:18.830 12:20:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:18.830 12:20:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:18.830 12:20:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.830 12:20:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.830 12:20:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.830 12:20:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.830 12:20:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:18.830 "name": "Existed_Raid", 00:21:18.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.830 "strip_size_kb": 64, 00:21:18.830 "state": "configuring", 00:21:18.830 "raid_level": "raid5f", 00:21:18.830 "superblock": false, 00:21:18.830 "num_base_bdevs": 4, 00:21:18.830 "num_base_bdevs_discovered": 2, 00:21:18.830 "num_base_bdevs_operational": 4, 00:21:18.830 "base_bdevs_list": [ 00:21:18.830 { 00:21:18.830 "name": "BaseBdev1", 00:21:18.830 "uuid": "0cf45eab-22b1-445e-8cc1-c5fb54211dcc", 00:21:18.830 "is_configured": true, 00:21:18.830 "data_offset": 0, 00:21:18.830 "data_size": 65536 00:21:18.830 }, 00:21:18.830 { 00:21:18.830 "name": null, 00:21:18.830 "uuid": "3b7c0bca-3306-4b23-9dee-740b954a60ee", 00:21:18.830 "is_configured": false, 00:21:18.830 "data_offset": 0, 00:21:18.830 "data_size": 65536 00:21:18.830 }, 00:21:18.830 { 00:21:18.830 "name": null, 00:21:18.830 "uuid": "fa098956-541a-413e-8a32-27e0290e0bca", 00:21:18.830 "is_configured": false, 00:21:18.830 "data_offset": 0, 00:21:18.830 "data_size": 65536 00:21:18.830 }, 00:21:18.830 { 00:21:18.830 "name": "BaseBdev4", 00:21:18.830 "uuid": "b6d97fef-b436-4828-8406-b8523f29d2e1", 00:21:18.830 "is_configured": true, 00:21:18.830 "data_offset": 0, 00:21:18.830 "data_size": 65536 00:21:18.830 } 00:21:18.830 ] 00:21:18.830 }' 00:21:18.830 12:20:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:18.830 12:20:14 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.397 12:20:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.397 12:20:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.397 12:20:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.397 12:20:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:19.397 12:20:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.397 12:20:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:21:19.397 12:20:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:19.397 12:20:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.397 12:20:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.397 [2024-11-25 12:20:15.381884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:19.397 12:20:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.397 12:20:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:19.397 12:20:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:19.397 12:20:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:19.397 12:20:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:19.397 12:20:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:19.397 
12:20:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:19.397 12:20:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:19.397 12:20:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:19.397 12:20:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:19.397 12:20:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:19.397 12:20:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.397 12:20:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:19.397 12:20:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.397 12:20:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.397 12:20:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.397 12:20:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:19.397 "name": "Existed_Raid", 00:21:19.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.397 "strip_size_kb": 64, 00:21:19.397 "state": "configuring", 00:21:19.397 "raid_level": "raid5f", 00:21:19.397 "superblock": false, 00:21:19.397 "num_base_bdevs": 4, 00:21:19.397 "num_base_bdevs_discovered": 3, 00:21:19.397 "num_base_bdevs_operational": 4, 00:21:19.397 "base_bdevs_list": [ 00:21:19.397 { 00:21:19.397 "name": "BaseBdev1", 00:21:19.397 "uuid": "0cf45eab-22b1-445e-8cc1-c5fb54211dcc", 00:21:19.397 "is_configured": true, 00:21:19.397 "data_offset": 0, 00:21:19.397 "data_size": 65536 00:21:19.397 }, 00:21:19.397 { 00:21:19.397 "name": null, 00:21:19.397 "uuid": "3b7c0bca-3306-4b23-9dee-740b954a60ee", 00:21:19.397 "is_configured": 
false, 00:21:19.397 "data_offset": 0, 00:21:19.397 "data_size": 65536 00:21:19.397 }, 00:21:19.397 { 00:21:19.397 "name": "BaseBdev3", 00:21:19.397 "uuid": "fa098956-541a-413e-8a32-27e0290e0bca", 00:21:19.397 "is_configured": true, 00:21:19.397 "data_offset": 0, 00:21:19.397 "data_size": 65536 00:21:19.397 }, 00:21:19.397 { 00:21:19.397 "name": "BaseBdev4", 00:21:19.397 "uuid": "b6d97fef-b436-4828-8406-b8523f29d2e1", 00:21:19.397 "is_configured": true, 00:21:19.397 "data_offset": 0, 00:21:19.397 "data_size": 65536 00:21:19.397 } 00:21:19.397 ] 00:21:19.397 }' 00:21:19.397 12:20:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:19.397 12:20:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.964 12:20:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:19.964 12:20:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.964 12:20:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.964 12:20:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.964 12:20:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.964 12:20:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:21:19.964 12:20:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:19.964 12:20:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.964 12:20:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.964 [2024-11-25 12:20:15.962019] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:19.964 12:20:16 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.964 12:20:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:19.964 12:20:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:19.964 12:20:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:19.964 12:20:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:19.964 12:20:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:19.964 12:20:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:19.964 12:20:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:20.223 12:20:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:20.223 12:20:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:20.223 12:20:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:20.223 12:20:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:20.223 12:20:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:20.223 12:20:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.223 12:20:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.223 12:20:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.223 12:20:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:20.223 "name": "Existed_Raid", 00:21:20.223 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:21:20.223 "strip_size_kb": 64, 00:21:20.223 "state": "configuring", 00:21:20.223 "raid_level": "raid5f", 00:21:20.223 "superblock": false, 00:21:20.223 "num_base_bdevs": 4, 00:21:20.223 "num_base_bdevs_discovered": 2, 00:21:20.223 "num_base_bdevs_operational": 4, 00:21:20.223 "base_bdevs_list": [ 00:21:20.223 { 00:21:20.223 "name": null, 00:21:20.223 "uuid": "0cf45eab-22b1-445e-8cc1-c5fb54211dcc", 00:21:20.223 "is_configured": false, 00:21:20.223 "data_offset": 0, 00:21:20.223 "data_size": 65536 00:21:20.223 }, 00:21:20.223 { 00:21:20.223 "name": null, 00:21:20.223 "uuid": "3b7c0bca-3306-4b23-9dee-740b954a60ee", 00:21:20.223 "is_configured": false, 00:21:20.223 "data_offset": 0, 00:21:20.223 "data_size": 65536 00:21:20.223 }, 00:21:20.223 { 00:21:20.223 "name": "BaseBdev3", 00:21:20.223 "uuid": "fa098956-541a-413e-8a32-27e0290e0bca", 00:21:20.223 "is_configured": true, 00:21:20.223 "data_offset": 0, 00:21:20.223 "data_size": 65536 00:21:20.223 }, 00:21:20.223 { 00:21:20.223 "name": "BaseBdev4", 00:21:20.223 "uuid": "b6d97fef-b436-4828-8406-b8523f29d2e1", 00:21:20.223 "is_configured": true, 00:21:20.223 "data_offset": 0, 00:21:20.223 "data_size": 65536 00:21:20.223 } 00:21:20.223 ] 00:21:20.223 }' 00:21:20.223 12:20:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:20.223 12:20:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.789 12:20:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:20.789 12:20:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.789 12:20:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.789 12:20:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:20.789 12:20:16 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.789 12:20:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:21:20.789 12:20:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:20.789 12:20:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.789 12:20:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.789 [2024-11-25 12:20:16.645376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:20.789 12:20:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.789 12:20:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:20.789 12:20:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:20.789 12:20:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:20.789 12:20:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:20.789 12:20:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:20.789 12:20:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:20.789 12:20:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:20.789 12:20:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:20.789 12:20:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:20.789 12:20:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:20.789 12:20:16 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:20.789 12:20:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:20.789 12:20:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.789 12:20:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.789 12:20:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.789 12:20:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:20.789 "name": "Existed_Raid", 00:21:20.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:20.789 "strip_size_kb": 64, 00:21:20.789 "state": "configuring", 00:21:20.789 "raid_level": "raid5f", 00:21:20.789 "superblock": false, 00:21:20.789 "num_base_bdevs": 4, 00:21:20.789 "num_base_bdevs_discovered": 3, 00:21:20.789 "num_base_bdevs_operational": 4, 00:21:20.789 "base_bdevs_list": [ 00:21:20.789 { 00:21:20.789 "name": null, 00:21:20.789 "uuid": "0cf45eab-22b1-445e-8cc1-c5fb54211dcc", 00:21:20.789 "is_configured": false, 00:21:20.789 "data_offset": 0, 00:21:20.789 "data_size": 65536 00:21:20.789 }, 00:21:20.789 { 00:21:20.789 "name": "BaseBdev2", 00:21:20.789 "uuid": "3b7c0bca-3306-4b23-9dee-740b954a60ee", 00:21:20.789 "is_configured": true, 00:21:20.789 "data_offset": 0, 00:21:20.789 "data_size": 65536 00:21:20.789 }, 00:21:20.789 { 00:21:20.790 "name": "BaseBdev3", 00:21:20.790 "uuid": "fa098956-541a-413e-8a32-27e0290e0bca", 00:21:20.790 "is_configured": true, 00:21:20.790 "data_offset": 0, 00:21:20.790 "data_size": 65536 00:21:20.790 }, 00:21:20.790 { 00:21:20.790 "name": "BaseBdev4", 00:21:20.790 "uuid": "b6d97fef-b436-4828-8406-b8523f29d2e1", 00:21:20.790 "is_configured": true, 00:21:20.790 "data_offset": 0, 00:21:20.790 "data_size": 65536 00:21:20.790 } 00:21:20.790 ] 00:21:20.790 }' 00:21:20.790 12:20:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:20.790 12:20:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.356 12:20:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0cf45eab-22b1-445e-8cc1-c5fb54211dcc 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.357 [2024-11-25 12:20:17.348741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:21.357 [2024-11-25 
12:20:17.348819] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:21.357 [2024-11-25 12:20:17.348833] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:21:21.357 [2024-11-25 12:20:17.349163] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:21:21.357 [2024-11-25 12:20:17.355801] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:21.357 [2024-11-25 12:20:17.355836] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:21:21.357 [2024-11-25 12:20:17.356174] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:21.357 NewBaseBdev 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.357 [ 00:21:21.357 { 00:21:21.357 "name": "NewBaseBdev", 00:21:21.357 "aliases": [ 00:21:21.357 "0cf45eab-22b1-445e-8cc1-c5fb54211dcc" 00:21:21.357 ], 00:21:21.357 "product_name": "Malloc disk", 00:21:21.357 "block_size": 512, 00:21:21.357 "num_blocks": 65536, 00:21:21.357 "uuid": "0cf45eab-22b1-445e-8cc1-c5fb54211dcc", 00:21:21.357 "assigned_rate_limits": { 00:21:21.357 "rw_ios_per_sec": 0, 00:21:21.357 "rw_mbytes_per_sec": 0, 00:21:21.357 "r_mbytes_per_sec": 0, 00:21:21.357 "w_mbytes_per_sec": 0 00:21:21.357 }, 00:21:21.357 "claimed": true, 00:21:21.357 "claim_type": "exclusive_write", 00:21:21.357 "zoned": false, 00:21:21.357 "supported_io_types": { 00:21:21.357 "read": true, 00:21:21.357 "write": true, 00:21:21.357 "unmap": true, 00:21:21.357 "flush": true, 00:21:21.357 "reset": true, 00:21:21.357 "nvme_admin": false, 00:21:21.357 "nvme_io": false, 00:21:21.357 "nvme_io_md": false, 00:21:21.357 "write_zeroes": true, 00:21:21.357 "zcopy": true, 00:21:21.357 "get_zone_info": false, 00:21:21.357 "zone_management": false, 00:21:21.357 "zone_append": false, 00:21:21.357 "compare": false, 00:21:21.357 "compare_and_write": false, 00:21:21.357 "abort": true, 00:21:21.357 "seek_hole": false, 00:21:21.357 "seek_data": false, 00:21:21.357 "copy": true, 00:21:21.357 "nvme_iov_md": false 00:21:21.357 }, 00:21:21.357 "memory_domains": [ 00:21:21.357 { 00:21:21.357 "dma_device_id": "system", 00:21:21.357 "dma_device_type": 1 00:21:21.357 }, 00:21:21.357 { 00:21:21.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:21.357 "dma_device_type": 2 00:21:21.357 } 
00:21:21.357 ], 00:21:21.357 "driver_specific": {} 00:21:21.357 } 00:21:21.357 ] 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.357 12:20:17 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.616 12:20:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:21.616 "name": "Existed_Raid", 00:21:21.616 "uuid": "5aac0c90-b6e6-429c-a309-5c5b10cfec05", 00:21:21.616 "strip_size_kb": 64, 00:21:21.616 "state": "online", 00:21:21.616 "raid_level": "raid5f", 00:21:21.616 "superblock": false, 00:21:21.616 "num_base_bdevs": 4, 00:21:21.616 "num_base_bdevs_discovered": 4, 00:21:21.616 "num_base_bdevs_operational": 4, 00:21:21.616 "base_bdevs_list": [ 00:21:21.616 { 00:21:21.616 "name": "NewBaseBdev", 00:21:21.616 "uuid": "0cf45eab-22b1-445e-8cc1-c5fb54211dcc", 00:21:21.616 "is_configured": true, 00:21:21.616 "data_offset": 0, 00:21:21.616 "data_size": 65536 00:21:21.616 }, 00:21:21.616 { 00:21:21.616 "name": "BaseBdev2", 00:21:21.616 "uuid": "3b7c0bca-3306-4b23-9dee-740b954a60ee", 00:21:21.616 "is_configured": true, 00:21:21.616 "data_offset": 0, 00:21:21.616 "data_size": 65536 00:21:21.616 }, 00:21:21.617 { 00:21:21.617 "name": "BaseBdev3", 00:21:21.617 "uuid": "fa098956-541a-413e-8a32-27e0290e0bca", 00:21:21.617 "is_configured": true, 00:21:21.617 "data_offset": 0, 00:21:21.617 "data_size": 65536 00:21:21.617 }, 00:21:21.617 { 00:21:21.617 "name": "BaseBdev4", 00:21:21.617 "uuid": "b6d97fef-b436-4828-8406-b8523f29d2e1", 00:21:21.617 "is_configured": true, 00:21:21.617 "data_offset": 0, 00:21:21.617 "data_size": 65536 00:21:21.617 } 00:21:21.617 ] 00:21:21.617 }' 00:21:21.617 12:20:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:21.617 12:20:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.875 12:20:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:21:21.875 12:20:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:21.875 12:20:17 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:21.875 12:20:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:21.875 12:20:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:21.875 12:20:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:21.875 12:20:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:21.875 12:20:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:21.875 12:20:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.875 12:20:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.875 [2024-11-25 12:20:17.936712] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:21.875 12:20:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.140 12:20:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:22.140 "name": "Existed_Raid", 00:21:22.140 "aliases": [ 00:21:22.140 "5aac0c90-b6e6-429c-a309-5c5b10cfec05" 00:21:22.140 ], 00:21:22.140 "product_name": "Raid Volume", 00:21:22.140 "block_size": 512, 00:21:22.140 "num_blocks": 196608, 00:21:22.140 "uuid": "5aac0c90-b6e6-429c-a309-5c5b10cfec05", 00:21:22.140 "assigned_rate_limits": { 00:21:22.140 "rw_ios_per_sec": 0, 00:21:22.140 "rw_mbytes_per_sec": 0, 00:21:22.140 "r_mbytes_per_sec": 0, 00:21:22.140 "w_mbytes_per_sec": 0 00:21:22.140 }, 00:21:22.140 "claimed": false, 00:21:22.140 "zoned": false, 00:21:22.140 "supported_io_types": { 00:21:22.140 "read": true, 00:21:22.140 "write": true, 00:21:22.140 "unmap": false, 00:21:22.140 "flush": false, 00:21:22.140 "reset": true, 00:21:22.140 "nvme_admin": false, 00:21:22.140 "nvme_io": false, 00:21:22.140 "nvme_io_md": 
false, 00:21:22.140 "write_zeroes": true, 00:21:22.140 "zcopy": false, 00:21:22.140 "get_zone_info": false, 00:21:22.140 "zone_management": false, 00:21:22.140 "zone_append": false, 00:21:22.140 "compare": false, 00:21:22.140 "compare_and_write": false, 00:21:22.140 "abort": false, 00:21:22.140 "seek_hole": false, 00:21:22.140 "seek_data": false, 00:21:22.140 "copy": false, 00:21:22.140 "nvme_iov_md": false 00:21:22.140 }, 00:21:22.140 "driver_specific": { 00:21:22.140 "raid": { 00:21:22.140 "uuid": "5aac0c90-b6e6-429c-a309-5c5b10cfec05", 00:21:22.140 "strip_size_kb": 64, 00:21:22.140 "state": "online", 00:21:22.140 "raid_level": "raid5f", 00:21:22.140 "superblock": false, 00:21:22.140 "num_base_bdevs": 4, 00:21:22.140 "num_base_bdevs_discovered": 4, 00:21:22.140 "num_base_bdevs_operational": 4, 00:21:22.140 "base_bdevs_list": [ 00:21:22.140 { 00:21:22.140 "name": "NewBaseBdev", 00:21:22.140 "uuid": "0cf45eab-22b1-445e-8cc1-c5fb54211dcc", 00:21:22.140 "is_configured": true, 00:21:22.140 "data_offset": 0, 00:21:22.140 "data_size": 65536 00:21:22.140 }, 00:21:22.140 { 00:21:22.140 "name": "BaseBdev2", 00:21:22.140 "uuid": "3b7c0bca-3306-4b23-9dee-740b954a60ee", 00:21:22.140 "is_configured": true, 00:21:22.140 "data_offset": 0, 00:21:22.140 "data_size": 65536 00:21:22.140 }, 00:21:22.140 { 00:21:22.140 "name": "BaseBdev3", 00:21:22.140 "uuid": "fa098956-541a-413e-8a32-27e0290e0bca", 00:21:22.140 "is_configured": true, 00:21:22.140 "data_offset": 0, 00:21:22.140 "data_size": 65536 00:21:22.140 }, 00:21:22.140 { 00:21:22.140 "name": "BaseBdev4", 00:21:22.140 "uuid": "b6d97fef-b436-4828-8406-b8523f29d2e1", 00:21:22.140 "is_configured": true, 00:21:22.140 "data_offset": 0, 00:21:22.140 "data_size": 65536 00:21:22.140 } 00:21:22.140 ] 00:21:22.140 } 00:21:22.140 } 00:21:22.140 }' 00:21:22.140 12:20:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:22.140 12:20:18 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:21:22.140 BaseBdev2 00:21:22.140 BaseBdev3 00:21:22.140 BaseBdev4' 00:21:22.140 12:20:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:22.140 12:20:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:22.140 12:20:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:22.140 12:20:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:21:22.140 12:20:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.140 12:20:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.140 12:20:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:22.140 12:20:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.140 12:20:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:22.140 12:20:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:22.140 12:20:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:22.140 12:20:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:22.140 12:20:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:22.140 12:20:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.140 12:20:18 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:22.140 12:20:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.140 12:20:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:22.140 12:20:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:22.140 12:20:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:22.140 12:20:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:22.140 12:20:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:22.140 12:20:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.140 12:20:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.398 12:20:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.399 12:20:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:22.399 12:20:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:22.399 12:20:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:22.399 12:20:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:21:22.399 12:20:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:22.399 12:20:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.399 12:20:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.399 12:20:18 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.399 12:20:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:22.399 12:20:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:22.399 12:20:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:22.399 12:20:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.399 12:20:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.399 [2024-11-25 12:20:18.332840] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:22.399 [2024-11-25 12:20:18.332919] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:22.399 [2024-11-25 12:20:18.333044] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:22.399 [2024-11-25 12:20:18.333548] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:22.399 [2024-11-25 12:20:18.333579] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:21:22.399 12:20:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.399 12:20:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83152 00:21:22.399 12:20:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 83152 ']' 00:21:22.399 12:20:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 83152 00:21:22.399 12:20:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:21:22.399 12:20:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:21:22.399 12:20:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83152 00:21:22.399 12:20:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:22.399 12:20:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:22.399 killing process with pid 83152 00:21:22.399 12:20:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83152' 00:21:22.399 12:20:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 83152 00:21:22.399 [2024-11-25 12:20:18.367781] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:22.399 12:20:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 83152 00:21:22.658 [2024-11-25 12:20:18.729680] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:24.038 12:20:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:21:24.038 00:21:24.038 real 0m13.127s 00:21:24.038 user 0m21.607s 00:21:24.038 sys 0m1.909s 00:21:24.038 12:20:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:24.038 12:20:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.038 ************************************ 00:21:24.038 END TEST raid5f_state_function_test 00:21:24.038 ************************************ 00:21:24.038 12:20:19 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:21:24.038 12:20:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:24.038 12:20:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:24.038 12:20:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:24.038 ************************************ 00:21:24.038 START TEST 
raid5f_state_function_test_sb 00:21:24.038 ************************************ 00:21:24.038 12:20:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:21:24.038 12:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:21:24.038 12:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:21:24.038 12:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:21:24.038 12:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:24.038 12:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:24.038 12:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:24.038 12:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:24.038 12:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:24.038 12:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:24.038 12:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:24.038 12:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:24.038 12:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:24.038 12:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:21:24.038 12:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:24.038 12:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:24.038 12:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:21:24.038 
12:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:24.038 12:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:24.038 12:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:24.038 12:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:24.038 12:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:24.038 12:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:24.038 12:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:24.038 12:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:24.038 12:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:21:24.038 12:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:21:24.038 12:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:21:24.038 12:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:21:24.038 12:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:21:24.038 12:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83835 00:21:24.038 12:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:24.038 Process raid pid: 83835 00:21:24.038 12:20:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83835' 00:21:24.038 12:20:19 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83835 00:21:24.038 12:20:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83835 ']' 00:21:24.038 12:20:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.038 12:20:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:24.038 12:20:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:24.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:24.038 12:20:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:24.038 12:20:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.038 [2024-11-25 12:20:19.953218] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:21:24.038 [2024-11-25 12:20:19.953420] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:24.297 [2024-11-25 12:20:20.129975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.297 [2024-11-25 12:20:20.258142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.556 [2024-11-25 12:20:20.472133] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:24.556 [2024-11-25 12:20:20.472197] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:25.125 12:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:25.125 12:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:21:25.125 12:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:25.125 12:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.125 12:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.125 [2024-11-25 12:20:21.032593] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:25.125 [2024-11-25 12:20:21.032661] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:25.125 [2024-11-25 12:20:21.032679] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:25.125 [2024-11-25 12:20:21.032696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:25.125 [2024-11-25 12:20:21.032706] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:21:25.125 [2024-11-25 12:20:21.032721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:25.125 [2024-11-25 12:20:21.032731] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:25.125 [2024-11-25 12:20:21.032760] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:25.125 12:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.125 12:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:25.125 12:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:25.125 12:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:25.125 12:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:25.125 12:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:25.125 12:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:25.125 12:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:25.125 12:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:25.125 12:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:25.125 12:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:25.125 12:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:25.125 12:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:21:25.125 12:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.125 12:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:25.125 12:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.125 12:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:25.125 "name": "Existed_Raid", 00:21:25.125 "uuid": "55c0f1b3-815c-447d-95c3-b8af8cb62a58", 00:21:25.125 "strip_size_kb": 64, 00:21:25.125 "state": "configuring", 00:21:25.125 "raid_level": "raid5f", 00:21:25.125 "superblock": true, 00:21:25.125 "num_base_bdevs": 4, 00:21:25.125 "num_base_bdevs_discovered": 0, 00:21:25.125 "num_base_bdevs_operational": 4, 00:21:25.125 "base_bdevs_list": [ 00:21:25.125 { 00:21:25.125 "name": "BaseBdev1", 00:21:25.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.125 "is_configured": false, 00:21:25.125 "data_offset": 0, 00:21:25.125 "data_size": 0 00:21:25.125 }, 00:21:25.125 { 00:21:25.125 "name": "BaseBdev2", 00:21:25.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.125 "is_configured": false, 00:21:25.125 "data_offset": 0, 00:21:25.125 "data_size": 0 00:21:25.125 }, 00:21:25.125 { 00:21:25.125 "name": "BaseBdev3", 00:21:25.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.125 "is_configured": false, 00:21:25.125 "data_offset": 0, 00:21:25.125 "data_size": 0 00:21:25.125 }, 00:21:25.125 { 00:21:25.125 "name": "BaseBdev4", 00:21:25.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.125 "is_configured": false, 00:21:25.125 "data_offset": 0, 00:21:25.125 "data_size": 0 00:21:25.125 } 00:21:25.125 ] 00:21:25.125 }' 00:21:25.125 12:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:25.125 12:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:21:25.699 12:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:25.699 12:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.699 12:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.699 [2024-11-25 12:20:21.576665] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:25.699 [2024-11-25 12:20:21.576717] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:25.699 12:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.699 12:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:25.699 12:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.699 12:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.699 [2024-11-25 12:20:21.588671] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:25.699 [2024-11-25 12:20:21.588726] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:25.699 [2024-11-25 12:20:21.588741] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:25.699 [2024-11-25 12:20:21.588758] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:25.699 [2024-11-25 12:20:21.588768] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:25.699 [2024-11-25 12:20:21.588782] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:25.699 [2024-11-25 12:20:21.588792] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:25.699 [2024-11-25 12:20:21.588806] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:25.700 12:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.700 12:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:25.700 12:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.700 12:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.700 [2024-11-25 12:20:21.639545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:25.700 BaseBdev1 00:21:25.700 12:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.700 12:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:25.700 12:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:25.700 12:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:25.700 12:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:25.700 12:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:25.700 12:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:25.700 12:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:25.700 12:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.700 12:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:21:25.700 12:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.700 12:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:25.700 12:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.700 12:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.700 [ 00:21:25.700 { 00:21:25.700 "name": "BaseBdev1", 00:21:25.700 "aliases": [ 00:21:25.700 "2898209e-978c-44ad-8dec-6c05bd60acb3" 00:21:25.700 ], 00:21:25.700 "product_name": "Malloc disk", 00:21:25.700 "block_size": 512, 00:21:25.700 "num_blocks": 65536, 00:21:25.700 "uuid": "2898209e-978c-44ad-8dec-6c05bd60acb3", 00:21:25.700 "assigned_rate_limits": { 00:21:25.700 "rw_ios_per_sec": 0, 00:21:25.700 "rw_mbytes_per_sec": 0, 00:21:25.700 "r_mbytes_per_sec": 0, 00:21:25.700 "w_mbytes_per_sec": 0 00:21:25.700 }, 00:21:25.700 "claimed": true, 00:21:25.700 "claim_type": "exclusive_write", 00:21:25.700 "zoned": false, 00:21:25.700 "supported_io_types": { 00:21:25.700 "read": true, 00:21:25.700 "write": true, 00:21:25.700 "unmap": true, 00:21:25.700 "flush": true, 00:21:25.700 "reset": true, 00:21:25.700 "nvme_admin": false, 00:21:25.700 "nvme_io": false, 00:21:25.700 "nvme_io_md": false, 00:21:25.700 "write_zeroes": true, 00:21:25.700 "zcopy": true, 00:21:25.700 "get_zone_info": false, 00:21:25.700 "zone_management": false, 00:21:25.700 "zone_append": false, 00:21:25.700 "compare": false, 00:21:25.700 "compare_and_write": false, 00:21:25.700 "abort": true, 00:21:25.700 "seek_hole": false, 00:21:25.700 "seek_data": false, 00:21:25.700 "copy": true, 00:21:25.700 "nvme_iov_md": false 00:21:25.700 }, 00:21:25.700 "memory_domains": [ 00:21:25.700 { 00:21:25.700 "dma_device_id": "system", 00:21:25.700 "dma_device_type": 1 00:21:25.700 }, 00:21:25.700 { 00:21:25.700 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:21:25.700 "dma_device_type": 2 00:21:25.700 } 00:21:25.700 ], 00:21:25.700 "driver_specific": {} 00:21:25.700 } 00:21:25.700 ] 00:21:25.700 12:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.700 12:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:25.700 12:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:25.700 12:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:25.700 12:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:25.700 12:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:25.700 12:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:25.700 12:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:25.700 12:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:25.700 12:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:25.700 12:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:25.700 12:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:25.700 12:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:25.700 12:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:25.700 12:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.700 12:20:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.700 12:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.700 12:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:25.700 "name": "Existed_Raid", 00:21:25.700 "uuid": "cb49e636-e126-4eca-8a71-7170afa4a828", 00:21:25.700 "strip_size_kb": 64, 00:21:25.700 "state": "configuring", 00:21:25.700 "raid_level": "raid5f", 00:21:25.700 "superblock": true, 00:21:25.700 "num_base_bdevs": 4, 00:21:25.700 "num_base_bdevs_discovered": 1, 00:21:25.700 "num_base_bdevs_operational": 4, 00:21:25.700 "base_bdevs_list": [ 00:21:25.700 { 00:21:25.700 "name": "BaseBdev1", 00:21:25.700 "uuid": "2898209e-978c-44ad-8dec-6c05bd60acb3", 00:21:25.700 "is_configured": true, 00:21:25.700 "data_offset": 2048, 00:21:25.700 "data_size": 63488 00:21:25.700 }, 00:21:25.700 { 00:21:25.700 "name": "BaseBdev2", 00:21:25.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.700 "is_configured": false, 00:21:25.700 "data_offset": 0, 00:21:25.700 "data_size": 0 00:21:25.700 }, 00:21:25.700 { 00:21:25.700 "name": "BaseBdev3", 00:21:25.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.700 "is_configured": false, 00:21:25.700 "data_offset": 0, 00:21:25.700 "data_size": 0 00:21:25.700 }, 00:21:25.700 { 00:21:25.700 "name": "BaseBdev4", 00:21:25.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.700 "is_configured": false, 00:21:25.700 "data_offset": 0, 00:21:25.700 "data_size": 0 00:21:25.700 } 00:21:25.700 ] 00:21:25.700 }' 00:21:25.700 12:20:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:25.700 12:20:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.291 12:20:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:26.291 12:20:22 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.291 12:20:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.291 [2024-11-25 12:20:22.179816] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:26.291 [2024-11-25 12:20:22.179959] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:26.291 12:20:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.291 12:20:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:26.291 12:20:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.291 12:20:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.291 [2024-11-25 12:20:22.187898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:26.291 [2024-11-25 12:20:22.190726] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:26.291 [2024-11-25 12:20:22.190794] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:26.291 [2024-11-25 12:20:22.190809] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:26.291 [2024-11-25 12:20:22.190826] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:26.291 [2024-11-25 12:20:22.190835] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:26.291 [2024-11-25 12:20:22.190848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:26.291 12:20:22 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.291 12:20:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:26.291 12:20:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:26.291 12:20:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:26.291 12:20:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:26.291 12:20:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:26.291 12:20:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:26.291 12:20:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:26.291 12:20:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:26.291 12:20:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:26.291 12:20:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:26.291 12:20:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:26.291 12:20:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:26.291 12:20:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:26.291 12:20:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:26.291 12:20:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.292 12:20:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.292 12:20:22 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.292 12:20:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:26.292 "name": "Existed_Raid", 00:21:26.292 "uuid": "9b7aa06a-9904-42a7-bfda-bb05fc948d6e", 00:21:26.292 "strip_size_kb": 64, 00:21:26.292 "state": "configuring", 00:21:26.292 "raid_level": "raid5f", 00:21:26.292 "superblock": true, 00:21:26.292 "num_base_bdevs": 4, 00:21:26.292 "num_base_bdevs_discovered": 1, 00:21:26.292 "num_base_bdevs_operational": 4, 00:21:26.292 "base_bdevs_list": [ 00:21:26.292 { 00:21:26.292 "name": "BaseBdev1", 00:21:26.292 "uuid": "2898209e-978c-44ad-8dec-6c05bd60acb3", 00:21:26.292 "is_configured": true, 00:21:26.292 "data_offset": 2048, 00:21:26.292 "data_size": 63488 00:21:26.292 }, 00:21:26.292 { 00:21:26.292 "name": "BaseBdev2", 00:21:26.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.292 "is_configured": false, 00:21:26.292 "data_offset": 0, 00:21:26.292 "data_size": 0 00:21:26.292 }, 00:21:26.292 { 00:21:26.292 "name": "BaseBdev3", 00:21:26.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.292 "is_configured": false, 00:21:26.292 "data_offset": 0, 00:21:26.292 "data_size": 0 00:21:26.292 }, 00:21:26.292 { 00:21:26.292 "name": "BaseBdev4", 00:21:26.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.292 "is_configured": false, 00:21:26.292 "data_offset": 0, 00:21:26.292 "data_size": 0 00:21:26.292 } 00:21:26.292 ] 00:21:26.292 }' 00:21:26.292 12:20:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:26.292 12:20:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.859 12:20:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:26.859 12:20:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:21:26.859 12:20:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.859 [2024-11-25 12:20:22.746423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:26.859 BaseBdev2 00:21:26.859 12:20:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.859 12:20:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:26.859 12:20:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:26.859 12:20:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:26.859 12:20:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:26.859 12:20:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:26.859 12:20:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:26.860 12:20:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:26.860 12:20:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.860 12:20:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.860 12:20:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.860 12:20:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:26.860 12:20:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.860 12:20:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.860 [ 00:21:26.860 { 00:21:26.860 "name": "BaseBdev2", 00:21:26.860 "aliases": [ 00:21:26.860 
"4ab48243-c87d-4e47-8a15-c870a3e906c0" 00:21:26.860 ], 00:21:26.860 "product_name": "Malloc disk", 00:21:26.860 "block_size": 512, 00:21:26.860 "num_blocks": 65536, 00:21:26.860 "uuid": "4ab48243-c87d-4e47-8a15-c870a3e906c0", 00:21:26.860 "assigned_rate_limits": { 00:21:26.860 "rw_ios_per_sec": 0, 00:21:26.860 "rw_mbytes_per_sec": 0, 00:21:26.860 "r_mbytes_per_sec": 0, 00:21:26.860 "w_mbytes_per_sec": 0 00:21:26.860 }, 00:21:26.860 "claimed": true, 00:21:26.860 "claim_type": "exclusive_write", 00:21:26.860 "zoned": false, 00:21:26.860 "supported_io_types": { 00:21:26.860 "read": true, 00:21:26.860 "write": true, 00:21:26.860 "unmap": true, 00:21:26.860 "flush": true, 00:21:26.860 "reset": true, 00:21:26.860 "nvme_admin": false, 00:21:26.860 "nvme_io": false, 00:21:26.860 "nvme_io_md": false, 00:21:26.860 "write_zeroes": true, 00:21:26.860 "zcopy": true, 00:21:26.860 "get_zone_info": false, 00:21:26.860 "zone_management": false, 00:21:26.860 "zone_append": false, 00:21:26.860 "compare": false, 00:21:26.860 "compare_and_write": false, 00:21:26.860 "abort": true, 00:21:26.860 "seek_hole": false, 00:21:26.860 "seek_data": false, 00:21:26.860 "copy": true, 00:21:26.860 "nvme_iov_md": false 00:21:26.860 }, 00:21:26.860 "memory_domains": [ 00:21:26.860 { 00:21:26.860 "dma_device_id": "system", 00:21:26.860 "dma_device_type": 1 00:21:26.860 }, 00:21:26.860 { 00:21:26.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:26.860 "dma_device_type": 2 00:21:26.860 } 00:21:26.860 ], 00:21:26.860 "driver_specific": {} 00:21:26.860 } 00:21:26.860 ] 00:21:26.860 12:20:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.860 12:20:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:26.860 12:20:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:26.860 12:20:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:21:26.860 12:20:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:26.860 12:20:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:26.860 12:20:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:26.860 12:20:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:26.860 12:20:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:26.860 12:20:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:26.860 12:20:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:26.860 12:20:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:26.860 12:20:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:26.860 12:20:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:26.860 12:20:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:26.860 12:20:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.860 12:20:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.860 12:20:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:26.860 12:20:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.860 12:20:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:26.860 "name": "Existed_Raid", 00:21:26.860 "uuid": 
"9b7aa06a-9904-42a7-bfda-bb05fc948d6e", 00:21:26.860 "strip_size_kb": 64, 00:21:26.860 "state": "configuring", 00:21:26.860 "raid_level": "raid5f", 00:21:26.860 "superblock": true, 00:21:26.860 "num_base_bdevs": 4, 00:21:26.860 "num_base_bdevs_discovered": 2, 00:21:26.860 "num_base_bdevs_operational": 4, 00:21:26.860 "base_bdevs_list": [ 00:21:26.860 { 00:21:26.860 "name": "BaseBdev1", 00:21:26.860 "uuid": "2898209e-978c-44ad-8dec-6c05bd60acb3", 00:21:26.860 "is_configured": true, 00:21:26.860 "data_offset": 2048, 00:21:26.860 "data_size": 63488 00:21:26.860 }, 00:21:26.860 { 00:21:26.860 "name": "BaseBdev2", 00:21:26.860 "uuid": "4ab48243-c87d-4e47-8a15-c870a3e906c0", 00:21:26.860 "is_configured": true, 00:21:26.860 "data_offset": 2048, 00:21:26.860 "data_size": 63488 00:21:26.860 }, 00:21:26.860 { 00:21:26.860 "name": "BaseBdev3", 00:21:26.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.860 "is_configured": false, 00:21:26.860 "data_offset": 0, 00:21:26.860 "data_size": 0 00:21:26.860 }, 00:21:26.860 { 00:21:26.860 "name": "BaseBdev4", 00:21:26.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.860 "is_configured": false, 00:21:26.860 "data_offset": 0, 00:21:26.860 "data_size": 0 00:21:26.860 } 00:21:26.860 ] 00:21:26.860 }' 00:21:26.860 12:20:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:26.860 12:20:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.428 12:20:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:27.428 12:20:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.428 12:20:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.428 [2024-11-25 12:20:23.350122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:27.428 BaseBdev3 
00:21:27.428 12:20:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.428 12:20:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:21:27.428 12:20:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:21:27.428 12:20:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:27.428 12:20:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:27.428 12:20:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:27.428 12:20:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:27.428 12:20:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:27.428 12:20:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.429 12:20:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.429 12:20:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.429 12:20:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:27.429 12:20:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.429 12:20:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.429 [ 00:21:27.429 { 00:21:27.429 "name": "BaseBdev3", 00:21:27.429 "aliases": [ 00:21:27.429 "43b3538e-5c96-48c7-bda4-1d7f9799f7fe" 00:21:27.429 ], 00:21:27.429 "product_name": "Malloc disk", 00:21:27.429 "block_size": 512, 00:21:27.429 "num_blocks": 65536, 00:21:27.429 "uuid": "43b3538e-5c96-48c7-bda4-1d7f9799f7fe", 00:21:27.429 
"assigned_rate_limits": { 00:21:27.429 "rw_ios_per_sec": 0, 00:21:27.429 "rw_mbytes_per_sec": 0, 00:21:27.429 "r_mbytes_per_sec": 0, 00:21:27.429 "w_mbytes_per_sec": 0 00:21:27.429 }, 00:21:27.429 "claimed": true, 00:21:27.429 "claim_type": "exclusive_write", 00:21:27.429 "zoned": false, 00:21:27.429 "supported_io_types": { 00:21:27.429 "read": true, 00:21:27.429 "write": true, 00:21:27.429 "unmap": true, 00:21:27.429 "flush": true, 00:21:27.429 "reset": true, 00:21:27.429 "nvme_admin": false, 00:21:27.429 "nvme_io": false, 00:21:27.429 "nvme_io_md": false, 00:21:27.429 "write_zeroes": true, 00:21:27.429 "zcopy": true, 00:21:27.429 "get_zone_info": false, 00:21:27.429 "zone_management": false, 00:21:27.429 "zone_append": false, 00:21:27.429 "compare": false, 00:21:27.429 "compare_and_write": false, 00:21:27.429 "abort": true, 00:21:27.429 "seek_hole": false, 00:21:27.429 "seek_data": false, 00:21:27.429 "copy": true, 00:21:27.429 "nvme_iov_md": false 00:21:27.429 }, 00:21:27.429 "memory_domains": [ 00:21:27.429 { 00:21:27.429 "dma_device_id": "system", 00:21:27.429 "dma_device_type": 1 00:21:27.429 }, 00:21:27.429 { 00:21:27.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:27.429 "dma_device_type": 2 00:21:27.429 } 00:21:27.429 ], 00:21:27.429 "driver_specific": {} 00:21:27.429 } 00:21:27.429 ] 00:21:27.429 12:20:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.429 12:20:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:27.429 12:20:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:27.429 12:20:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:27.429 12:20:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:27.429 12:20:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:21:27.429 12:20:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:27.429 12:20:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:27.429 12:20:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:27.429 12:20:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:27.429 12:20:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:27.429 12:20:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:27.429 12:20:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:27.429 12:20:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:27.429 12:20:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:27.429 12:20:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:27.429 12:20:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.429 12:20:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.429 12:20:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.429 12:20:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:27.429 "name": "Existed_Raid", 00:21:27.429 "uuid": "9b7aa06a-9904-42a7-bfda-bb05fc948d6e", 00:21:27.429 "strip_size_kb": 64, 00:21:27.429 "state": "configuring", 00:21:27.429 "raid_level": "raid5f", 00:21:27.429 "superblock": true, 00:21:27.429 "num_base_bdevs": 4, 00:21:27.429 "num_base_bdevs_discovered": 3, 
00:21:27.429 "num_base_bdevs_operational": 4, 00:21:27.429 "base_bdevs_list": [ 00:21:27.429 { 00:21:27.429 "name": "BaseBdev1", 00:21:27.429 "uuid": "2898209e-978c-44ad-8dec-6c05bd60acb3", 00:21:27.429 "is_configured": true, 00:21:27.429 "data_offset": 2048, 00:21:27.429 "data_size": 63488 00:21:27.429 }, 00:21:27.429 { 00:21:27.429 "name": "BaseBdev2", 00:21:27.429 "uuid": "4ab48243-c87d-4e47-8a15-c870a3e906c0", 00:21:27.429 "is_configured": true, 00:21:27.429 "data_offset": 2048, 00:21:27.429 "data_size": 63488 00:21:27.429 }, 00:21:27.429 { 00:21:27.429 "name": "BaseBdev3", 00:21:27.429 "uuid": "43b3538e-5c96-48c7-bda4-1d7f9799f7fe", 00:21:27.429 "is_configured": true, 00:21:27.429 "data_offset": 2048, 00:21:27.429 "data_size": 63488 00:21:27.429 }, 00:21:27.429 { 00:21:27.429 "name": "BaseBdev4", 00:21:27.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.429 "is_configured": false, 00:21:27.429 "data_offset": 0, 00:21:27.429 "data_size": 0 00:21:27.429 } 00:21:27.429 ] 00:21:27.429 }' 00:21:27.429 12:20:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:27.429 12:20:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.997 12:20:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:21:27.997 12:20:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.997 12:20:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.997 [2024-11-25 12:20:23.955145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:27.997 [2024-11-25 12:20:23.955629] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:27.997 [2024-11-25 12:20:23.955655] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:27.997 BaseBdev4 
00:21:27.997 [2024-11-25 12:20:23.955982] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:27.997 12:20:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.997 12:20:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:21:27.997 12:20:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:21:27.997 12:20:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:27.997 12:20:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:27.997 12:20:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:27.997 12:20:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:27.997 12:20:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:27.997 12:20:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.997 12:20:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.997 [2024-11-25 12:20:23.963246] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:27.997 [2024-11-25 12:20:23.963312] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:27.997 [2024-11-25 12:20:23.963726] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:27.997 12:20:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.997 12:20:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:27.997 12:20:23 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.997 12:20:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.997 [ 00:21:27.997 { 00:21:27.997 "name": "BaseBdev4", 00:21:27.997 "aliases": [ 00:21:27.997 "3a241690-4a0d-4641-8a65-3e4a66c64ac2" 00:21:27.997 ], 00:21:27.997 "product_name": "Malloc disk", 00:21:27.997 "block_size": 512, 00:21:27.997 "num_blocks": 65536, 00:21:27.997 "uuid": "3a241690-4a0d-4641-8a65-3e4a66c64ac2", 00:21:27.997 "assigned_rate_limits": { 00:21:27.997 "rw_ios_per_sec": 0, 00:21:27.997 "rw_mbytes_per_sec": 0, 00:21:27.997 "r_mbytes_per_sec": 0, 00:21:27.997 "w_mbytes_per_sec": 0 00:21:27.997 }, 00:21:27.997 "claimed": true, 00:21:27.997 "claim_type": "exclusive_write", 00:21:27.997 "zoned": false, 00:21:27.997 "supported_io_types": { 00:21:27.997 "read": true, 00:21:27.997 "write": true, 00:21:27.997 "unmap": true, 00:21:27.997 "flush": true, 00:21:27.997 "reset": true, 00:21:27.997 "nvme_admin": false, 00:21:27.997 "nvme_io": false, 00:21:27.997 "nvme_io_md": false, 00:21:27.997 "write_zeroes": true, 00:21:27.997 "zcopy": true, 00:21:27.997 "get_zone_info": false, 00:21:27.997 "zone_management": false, 00:21:27.997 "zone_append": false, 00:21:27.997 "compare": false, 00:21:27.997 "compare_and_write": false, 00:21:27.997 "abort": true, 00:21:27.997 "seek_hole": false, 00:21:27.997 "seek_data": false, 00:21:27.997 "copy": true, 00:21:27.997 "nvme_iov_md": false 00:21:27.997 }, 00:21:27.997 "memory_domains": [ 00:21:27.997 { 00:21:27.997 "dma_device_id": "system", 00:21:27.997 "dma_device_type": 1 00:21:27.997 }, 00:21:27.997 { 00:21:27.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:27.997 "dma_device_type": 2 00:21:27.997 } 00:21:27.997 ], 00:21:27.997 "driver_specific": {} 00:21:27.997 } 00:21:27.997 ] 00:21:27.997 12:20:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.997 12:20:23 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:27.997 12:20:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:27.997 12:20:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:27.997 12:20:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:21:27.997 12:20:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:27.997 12:20:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:27.997 12:20:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:27.997 12:20:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:27.997 12:20:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:27.997 12:20:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:27.997 12:20:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:27.997 12:20:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:27.997 12:20:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:27.997 12:20:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:27.997 12:20:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:27.997 12:20:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.997 12:20:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:21:27.997 12:20:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.997 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:27.997 "name": "Existed_Raid", 00:21:27.997 "uuid": "9b7aa06a-9904-42a7-bfda-bb05fc948d6e", 00:21:27.997 "strip_size_kb": 64, 00:21:27.997 "state": "online", 00:21:27.997 "raid_level": "raid5f", 00:21:27.997 "superblock": true, 00:21:27.997 "num_base_bdevs": 4, 00:21:27.997 "num_base_bdevs_discovered": 4, 00:21:27.997 "num_base_bdevs_operational": 4, 00:21:27.997 "base_bdevs_list": [ 00:21:27.997 { 00:21:27.997 "name": "BaseBdev1", 00:21:27.997 "uuid": "2898209e-978c-44ad-8dec-6c05bd60acb3", 00:21:27.997 "is_configured": true, 00:21:27.997 "data_offset": 2048, 00:21:27.997 "data_size": 63488 00:21:27.997 }, 00:21:27.997 { 00:21:27.997 "name": "BaseBdev2", 00:21:27.997 "uuid": "4ab48243-c87d-4e47-8a15-c870a3e906c0", 00:21:27.997 "is_configured": true, 00:21:27.997 "data_offset": 2048, 00:21:27.997 "data_size": 63488 00:21:27.997 }, 00:21:27.997 { 00:21:27.997 "name": "BaseBdev3", 00:21:27.997 "uuid": "43b3538e-5c96-48c7-bda4-1d7f9799f7fe", 00:21:27.997 "is_configured": true, 00:21:27.997 "data_offset": 2048, 00:21:27.997 "data_size": 63488 00:21:27.997 }, 00:21:27.997 { 00:21:27.997 "name": "BaseBdev4", 00:21:27.997 "uuid": "3a241690-4a0d-4641-8a65-3e4a66c64ac2", 00:21:27.997 "is_configured": true, 00:21:27.997 "data_offset": 2048, 00:21:27.997 "data_size": 63488 00:21:27.997 } 00:21:27.997 ] 00:21:27.997 }' 00:21:27.997 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:27.997 12:20:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.565 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:28.565 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:21:28.565 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:28.565 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:28.565 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:21:28.565 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:28.565 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:28.565 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:28.565 12:20:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.565 12:20:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.565 [2024-11-25 12:20:24.520479] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:28.565 12:20:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.565 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:28.565 "name": "Existed_Raid", 00:21:28.565 "aliases": [ 00:21:28.565 "9b7aa06a-9904-42a7-bfda-bb05fc948d6e" 00:21:28.565 ], 00:21:28.565 "product_name": "Raid Volume", 00:21:28.565 "block_size": 512, 00:21:28.565 "num_blocks": 190464, 00:21:28.565 "uuid": "9b7aa06a-9904-42a7-bfda-bb05fc948d6e", 00:21:28.565 "assigned_rate_limits": { 00:21:28.565 "rw_ios_per_sec": 0, 00:21:28.565 "rw_mbytes_per_sec": 0, 00:21:28.565 "r_mbytes_per_sec": 0, 00:21:28.565 "w_mbytes_per_sec": 0 00:21:28.565 }, 00:21:28.565 "claimed": false, 00:21:28.565 "zoned": false, 00:21:28.565 "supported_io_types": { 00:21:28.565 "read": true, 00:21:28.565 "write": true, 00:21:28.565 "unmap": false, 00:21:28.565 "flush": false, 
00:21:28.565 "reset": true, 00:21:28.565 "nvme_admin": false, 00:21:28.565 "nvme_io": false, 00:21:28.565 "nvme_io_md": false, 00:21:28.565 "write_zeroes": true, 00:21:28.565 "zcopy": false, 00:21:28.565 "get_zone_info": false, 00:21:28.565 "zone_management": false, 00:21:28.565 "zone_append": false, 00:21:28.565 "compare": false, 00:21:28.565 "compare_and_write": false, 00:21:28.565 "abort": false, 00:21:28.565 "seek_hole": false, 00:21:28.565 "seek_data": false, 00:21:28.565 "copy": false, 00:21:28.565 "nvme_iov_md": false 00:21:28.565 }, 00:21:28.565 "driver_specific": { 00:21:28.565 "raid": { 00:21:28.565 "uuid": "9b7aa06a-9904-42a7-bfda-bb05fc948d6e", 00:21:28.565 "strip_size_kb": 64, 00:21:28.565 "state": "online", 00:21:28.565 "raid_level": "raid5f", 00:21:28.565 "superblock": true, 00:21:28.565 "num_base_bdevs": 4, 00:21:28.565 "num_base_bdevs_discovered": 4, 00:21:28.565 "num_base_bdevs_operational": 4, 00:21:28.565 "base_bdevs_list": [ 00:21:28.565 { 00:21:28.565 "name": "BaseBdev1", 00:21:28.565 "uuid": "2898209e-978c-44ad-8dec-6c05bd60acb3", 00:21:28.565 "is_configured": true, 00:21:28.565 "data_offset": 2048, 00:21:28.565 "data_size": 63488 00:21:28.565 }, 00:21:28.565 { 00:21:28.565 "name": "BaseBdev2", 00:21:28.565 "uuid": "4ab48243-c87d-4e47-8a15-c870a3e906c0", 00:21:28.565 "is_configured": true, 00:21:28.565 "data_offset": 2048, 00:21:28.565 "data_size": 63488 00:21:28.565 }, 00:21:28.565 { 00:21:28.565 "name": "BaseBdev3", 00:21:28.566 "uuid": "43b3538e-5c96-48c7-bda4-1d7f9799f7fe", 00:21:28.566 "is_configured": true, 00:21:28.566 "data_offset": 2048, 00:21:28.566 "data_size": 63488 00:21:28.566 }, 00:21:28.566 { 00:21:28.566 "name": "BaseBdev4", 00:21:28.566 "uuid": "3a241690-4a0d-4641-8a65-3e4a66c64ac2", 00:21:28.566 "is_configured": true, 00:21:28.566 "data_offset": 2048, 00:21:28.566 "data_size": 63488 00:21:28.566 } 00:21:28.566 ] 00:21:28.566 } 00:21:28.566 } 00:21:28.566 }' 00:21:28.566 12:20:24 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:28.566 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:28.566 BaseBdev2 00:21:28.566 BaseBdev3 00:21:28.566 BaseBdev4' 00:21:28.566 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:28.826 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:28.826 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:28.826 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:28.826 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:28.826 12:20:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.826 12:20:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.826 12:20:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.826 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:28.826 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:28.826 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:28.826 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:28.826 12:20:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.826 12:20:24 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:28.826 12:20:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.826 12:20:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.826 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:28.826 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:28.826 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:28.826 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:28.826 12:20:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.826 12:20:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.826 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:28.826 12:20:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.826 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:28.826 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:28.826 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:28.826 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:21:28.826 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:28.826 12:20:24 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.826 12:20:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.826 12:20:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.826 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:28.826 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:28.826 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:28.826 12:20:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.826 12:20:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.826 [2024-11-25 12:20:24.904382] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:29.085 12:20:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.085 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:29.085 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:21:29.085 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:29.085 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:21:29.085 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:29.085 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:21:29.085 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:29.085 12:20:24 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:29.085 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:29.085 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:29.085 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:29.085 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:29.085 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:29.085 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:29.085 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:29.085 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.085 12:20:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:29.085 12:20:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.085 12:20:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.085 12:20:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.085 12:20:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:29.085 "name": "Existed_Raid", 00:21:29.085 "uuid": "9b7aa06a-9904-42a7-bfda-bb05fc948d6e", 00:21:29.085 "strip_size_kb": 64, 00:21:29.085 "state": "online", 00:21:29.085 "raid_level": "raid5f", 00:21:29.085 "superblock": true, 00:21:29.085 "num_base_bdevs": 4, 00:21:29.085 "num_base_bdevs_discovered": 3, 00:21:29.085 "num_base_bdevs_operational": 3, 00:21:29.085 "base_bdevs_list": [ 00:21:29.085 { 00:21:29.085 "name": 
null, 00:21:29.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.085 "is_configured": false, 00:21:29.085 "data_offset": 0, 00:21:29.085 "data_size": 63488 00:21:29.085 }, 00:21:29.085 { 00:21:29.085 "name": "BaseBdev2", 00:21:29.085 "uuid": "4ab48243-c87d-4e47-8a15-c870a3e906c0", 00:21:29.085 "is_configured": true, 00:21:29.085 "data_offset": 2048, 00:21:29.085 "data_size": 63488 00:21:29.085 }, 00:21:29.085 { 00:21:29.085 "name": "BaseBdev3", 00:21:29.085 "uuid": "43b3538e-5c96-48c7-bda4-1d7f9799f7fe", 00:21:29.085 "is_configured": true, 00:21:29.085 "data_offset": 2048, 00:21:29.085 "data_size": 63488 00:21:29.085 }, 00:21:29.085 { 00:21:29.085 "name": "BaseBdev4", 00:21:29.085 "uuid": "3a241690-4a0d-4641-8a65-3e4a66c64ac2", 00:21:29.085 "is_configured": true, 00:21:29.085 "data_offset": 2048, 00:21:29.085 "data_size": 63488 00:21:29.085 } 00:21:29.085 ] 00:21:29.085 }' 00:21:29.085 12:20:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:29.085 12:20:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.653 12:20:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:29.653 12:20:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:29.653 12:20:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.653 12:20:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.653 12:20:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.653 12:20:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:29.653 12:20:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.653 12:20:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:21:29.653 12:20:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:29.653 12:20:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:29.653 12:20:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.653 12:20:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.653 [2024-11-25 12:20:25.573271] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:29.653 [2024-11-25 12:20:25.573495] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:29.653 [2024-11-25 12:20:25.659109] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:29.653 12:20:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.653 12:20:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:29.653 12:20:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:29.653 12:20:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.653 12:20:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.653 12:20:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.653 12:20:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:29.653 12:20:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.653 12:20:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:29.653 12:20:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:21:29.653 12:20:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:21:29.653 12:20:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.653 12:20:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.653 [2024-11-25 12:20:25.719142] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:29.912 12:20:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.912 12:20:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:29.912 12:20:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:29.912 12:20:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.912 12:20:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.912 12:20:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.912 12:20:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:29.912 12:20:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.912 12:20:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:29.912 12:20:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:29.912 12:20:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:21:29.912 12:20:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.912 12:20:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.912 [2024-11-25 
12:20:25.866462] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:29.912 [2024-11-25 12:20:25.866528] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:29.912 12:20:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.912 12:20:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:29.912 12:20:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:29.912 12:20:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.912 12:20:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.912 12:20:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.912 12:20:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:29.912 12:20:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.172 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:30.172 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:30.172 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:21:30.172 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:21:30.172 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:30.172 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:30.172 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.172 12:20:26 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.172 BaseBdev2 00:21:30.172 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.172 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:21:30.172 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:30.172 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:30.172 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:30.172 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:30.172 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:30.172 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:30.172 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.172 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.172 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.172 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:30.172 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.172 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.172 [ 00:21:30.172 { 00:21:30.172 "name": "BaseBdev2", 00:21:30.172 "aliases": [ 00:21:30.172 "19e48426-d9e5-41a1-a0ab-afd18f0a22dc" 00:21:30.172 ], 00:21:30.172 "product_name": "Malloc disk", 00:21:30.172 "block_size": 512, 00:21:30.172 
"num_blocks": 65536, 00:21:30.172 "uuid": "19e48426-d9e5-41a1-a0ab-afd18f0a22dc", 00:21:30.172 "assigned_rate_limits": { 00:21:30.172 "rw_ios_per_sec": 0, 00:21:30.172 "rw_mbytes_per_sec": 0, 00:21:30.172 "r_mbytes_per_sec": 0, 00:21:30.172 "w_mbytes_per_sec": 0 00:21:30.172 }, 00:21:30.172 "claimed": false, 00:21:30.172 "zoned": false, 00:21:30.172 "supported_io_types": { 00:21:30.172 "read": true, 00:21:30.172 "write": true, 00:21:30.172 "unmap": true, 00:21:30.172 "flush": true, 00:21:30.172 "reset": true, 00:21:30.172 "nvme_admin": false, 00:21:30.172 "nvme_io": false, 00:21:30.172 "nvme_io_md": false, 00:21:30.172 "write_zeroes": true, 00:21:30.172 "zcopy": true, 00:21:30.172 "get_zone_info": false, 00:21:30.172 "zone_management": false, 00:21:30.172 "zone_append": false, 00:21:30.172 "compare": false, 00:21:30.172 "compare_and_write": false, 00:21:30.172 "abort": true, 00:21:30.172 "seek_hole": false, 00:21:30.172 "seek_data": false, 00:21:30.172 "copy": true, 00:21:30.172 "nvme_iov_md": false 00:21:30.172 }, 00:21:30.172 "memory_domains": [ 00:21:30.172 { 00:21:30.172 "dma_device_id": "system", 00:21:30.172 "dma_device_type": 1 00:21:30.172 }, 00:21:30.172 { 00:21:30.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:30.172 "dma_device_type": 2 00:21:30.172 } 00:21:30.172 ], 00:21:30.172 "driver_specific": {} 00:21:30.172 } 00:21:30.172 ] 00:21:30.172 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.172 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:30.172 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:30.172 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:30.172 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:30.172 12:20:26 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.172 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.172 BaseBdev3 00:21:30.172 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.172 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:21:30.172 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:21:30.172 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:30.172 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:30.172 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:30.172 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:30.172 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:30.172 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.172 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.172 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.172 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:30.172 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.172 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.172 [ 00:21:30.172 { 00:21:30.172 "name": "BaseBdev3", 00:21:30.173 "aliases": [ 00:21:30.173 
"84c85fe3-fb7e-45e3-b63a-e24ddf32f7c6" 00:21:30.173 ], 00:21:30.173 "product_name": "Malloc disk", 00:21:30.173 "block_size": 512, 00:21:30.173 "num_blocks": 65536, 00:21:30.173 "uuid": "84c85fe3-fb7e-45e3-b63a-e24ddf32f7c6", 00:21:30.173 "assigned_rate_limits": { 00:21:30.173 "rw_ios_per_sec": 0, 00:21:30.173 "rw_mbytes_per_sec": 0, 00:21:30.173 "r_mbytes_per_sec": 0, 00:21:30.173 "w_mbytes_per_sec": 0 00:21:30.173 }, 00:21:30.173 "claimed": false, 00:21:30.173 "zoned": false, 00:21:30.173 "supported_io_types": { 00:21:30.173 "read": true, 00:21:30.173 "write": true, 00:21:30.173 "unmap": true, 00:21:30.173 "flush": true, 00:21:30.173 "reset": true, 00:21:30.173 "nvme_admin": false, 00:21:30.173 "nvme_io": false, 00:21:30.173 "nvme_io_md": false, 00:21:30.173 "write_zeroes": true, 00:21:30.173 "zcopy": true, 00:21:30.173 "get_zone_info": false, 00:21:30.173 "zone_management": false, 00:21:30.173 "zone_append": false, 00:21:30.173 "compare": false, 00:21:30.173 "compare_and_write": false, 00:21:30.173 "abort": true, 00:21:30.173 "seek_hole": false, 00:21:30.173 "seek_data": false, 00:21:30.173 "copy": true, 00:21:30.173 "nvme_iov_md": false 00:21:30.173 }, 00:21:30.173 "memory_domains": [ 00:21:30.173 { 00:21:30.173 "dma_device_id": "system", 00:21:30.173 "dma_device_type": 1 00:21:30.173 }, 00:21:30.173 { 00:21:30.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:30.173 "dma_device_type": 2 00:21:30.173 } 00:21:30.173 ], 00:21:30.173 "driver_specific": {} 00:21:30.173 } 00:21:30.173 ] 00:21:30.173 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.173 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:30.173 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:30.173 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:30.173 12:20:26 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:21:30.173 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.173 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.173 BaseBdev4 00:21:30.173 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.173 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:21:30.173 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:21:30.173 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:30.173 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:30.173 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:30.173 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:30.173 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:30.173 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.173 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.173 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.173 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:30.173 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.173 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:21:30.173 [ 00:21:30.173 { 00:21:30.173 "name": "BaseBdev4", 00:21:30.173 "aliases": [ 00:21:30.173 "3d4e213a-68df-4a45-9381-42a0643e82c4" 00:21:30.173 ], 00:21:30.173 "product_name": "Malloc disk", 00:21:30.173 "block_size": 512, 00:21:30.173 "num_blocks": 65536, 00:21:30.173 "uuid": "3d4e213a-68df-4a45-9381-42a0643e82c4", 00:21:30.173 "assigned_rate_limits": { 00:21:30.173 "rw_ios_per_sec": 0, 00:21:30.173 "rw_mbytes_per_sec": 0, 00:21:30.173 "r_mbytes_per_sec": 0, 00:21:30.173 "w_mbytes_per_sec": 0 00:21:30.173 }, 00:21:30.173 "claimed": false, 00:21:30.173 "zoned": false, 00:21:30.173 "supported_io_types": { 00:21:30.173 "read": true, 00:21:30.173 "write": true, 00:21:30.173 "unmap": true, 00:21:30.173 "flush": true, 00:21:30.173 "reset": true, 00:21:30.173 "nvme_admin": false, 00:21:30.173 "nvme_io": false, 00:21:30.173 "nvme_io_md": false, 00:21:30.173 "write_zeroes": true, 00:21:30.173 "zcopy": true, 00:21:30.173 "get_zone_info": false, 00:21:30.173 "zone_management": false, 00:21:30.173 "zone_append": false, 00:21:30.173 "compare": false, 00:21:30.173 "compare_and_write": false, 00:21:30.173 "abort": true, 00:21:30.173 "seek_hole": false, 00:21:30.173 "seek_data": false, 00:21:30.173 "copy": true, 00:21:30.173 "nvme_iov_md": false 00:21:30.173 }, 00:21:30.173 "memory_domains": [ 00:21:30.173 { 00:21:30.173 "dma_device_id": "system", 00:21:30.173 "dma_device_type": 1 00:21:30.173 }, 00:21:30.173 { 00:21:30.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:30.173 "dma_device_type": 2 00:21:30.173 } 00:21:30.173 ], 00:21:30.173 "driver_specific": {} 00:21:30.173 } 00:21:30.173 ] 00:21:30.173 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.173 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:30.173 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:30.173 12:20:26 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:30.173 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:30.173 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.173 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.433 [2024-11-25 12:20:26.262169] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:30.433 [2024-11-25 12:20:26.262226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:30.433 [2024-11-25 12:20:26.262275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:30.433 [2024-11-25 12:20:26.264849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:30.433 [2024-11-25 12:20:26.264938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:30.433 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.433 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:30.433 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:30.433 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:30.433 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:30.433 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:30.433 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:21:30.433 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:30.433 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:30.433 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:30.433 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:30.433 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:30.433 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.433 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.433 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.433 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.433 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:30.433 "name": "Existed_Raid", 00:21:30.433 "uuid": "25be7f17-92a2-4abf-b128-f0d73fd1fbce", 00:21:30.433 "strip_size_kb": 64, 00:21:30.433 "state": "configuring", 00:21:30.433 "raid_level": "raid5f", 00:21:30.433 "superblock": true, 00:21:30.433 "num_base_bdevs": 4, 00:21:30.433 "num_base_bdevs_discovered": 3, 00:21:30.433 "num_base_bdevs_operational": 4, 00:21:30.433 "base_bdevs_list": [ 00:21:30.433 { 00:21:30.433 "name": "BaseBdev1", 00:21:30.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.433 "is_configured": false, 00:21:30.433 "data_offset": 0, 00:21:30.433 "data_size": 0 00:21:30.433 }, 00:21:30.433 { 00:21:30.433 "name": "BaseBdev2", 00:21:30.433 "uuid": "19e48426-d9e5-41a1-a0ab-afd18f0a22dc", 00:21:30.433 "is_configured": true, 00:21:30.433 "data_offset": 2048, 00:21:30.433 
"data_size": 63488 00:21:30.433 }, 00:21:30.433 { 00:21:30.433 "name": "BaseBdev3", 00:21:30.433 "uuid": "84c85fe3-fb7e-45e3-b63a-e24ddf32f7c6", 00:21:30.433 "is_configured": true, 00:21:30.433 "data_offset": 2048, 00:21:30.433 "data_size": 63488 00:21:30.433 }, 00:21:30.433 { 00:21:30.433 "name": "BaseBdev4", 00:21:30.433 "uuid": "3d4e213a-68df-4a45-9381-42a0643e82c4", 00:21:30.433 "is_configured": true, 00:21:30.433 "data_offset": 2048, 00:21:30.433 "data_size": 63488 00:21:30.433 } 00:21:30.433 ] 00:21:30.433 }' 00:21:30.433 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:30.433 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.693 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:21:30.693 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.693 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.693 [2024-11-25 12:20:26.766332] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:30.693 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.693 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:30.693 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:30.693 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:30.693 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:30.693 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:30.693 12:20:26 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:30.693 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:30.693 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:30.693 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:30.693 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:30.693 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.693 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.693 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:30.693 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.952 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.952 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:30.952 "name": "Existed_Raid", 00:21:30.952 "uuid": "25be7f17-92a2-4abf-b128-f0d73fd1fbce", 00:21:30.952 "strip_size_kb": 64, 00:21:30.952 "state": "configuring", 00:21:30.952 "raid_level": "raid5f", 00:21:30.952 "superblock": true, 00:21:30.952 "num_base_bdevs": 4, 00:21:30.952 "num_base_bdevs_discovered": 2, 00:21:30.952 "num_base_bdevs_operational": 4, 00:21:30.952 "base_bdevs_list": [ 00:21:30.952 { 00:21:30.952 "name": "BaseBdev1", 00:21:30.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.952 "is_configured": false, 00:21:30.952 "data_offset": 0, 00:21:30.952 "data_size": 0 00:21:30.952 }, 00:21:30.952 { 00:21:30.952 "name": null, 00:21:30.952 "uuid": "19e48426-d9e5-41a1-a0ab-afd18f0a22dc", 00:21:30.952 
"is_configured": false, 00:21:30.952 "data_offset": 0, 00:21:30.952 "data_size": 63488 00:21:30.952 }, 00:21:30.952 { 00:21:30.952 "name": "BaseBdev3", 00:21:30.952 "uuid": "84c85fe3-fb7e-45e3-b63a-e24ddf32f7c6", 00:21:30.952 "is_configured": true, 00:21:30.952 "data_offset": 2048, 00:21:30.952 "data_size": 63488 00:21:30.952 }, 00:21:30.952 { 00:21:30.952 "name": "BaseBdev4", 00:21:30.952 "uuid": "3d4e213a-68df-4a45-9381-42a0643e82c4", 00:21:30.952 "is_configured": true, 00:21:30.952 "data_offset": 2048, 00:21:30.952 "data_size": 63488 00:21:30.952 } 00:21:30.952 ] 00:21:30.952 }' 00:21:30.952 12:20:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:30.952 12:20:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.211 12:20:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.211 12:20:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:31.211 12:20:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.211 12:20:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.469 12:20:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.469 12:20:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:21:31.469 12:20:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:31.469 12:20:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.469 12:20:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.469 [2024-11-25 12:20:27.378865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:21:31.469 BaseBdev1 00:21:31.469 12:20:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.469 12:20:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:21:31.469 12:20:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:31.469 12:20:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:31.469 12:20:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:31.469 12:20:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:31.469 12:20:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:31.469 12:20:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:31.469 12:20:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.470 12:20:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.470 12:20:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.470 12:20:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:31.470 12:20:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.470 12:20:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.470 [ 00:21:31.470 { 00:21:31.470 "name": "BaseBdev1", 00:21:31.470 "aliases": [ 00:21:31.470 "548a29ed-498a-4139-abc0-e5bf31e39c75" 00:21:31.470 ], 00:21:31.470 "product_name": "Malloc disk", 00:21:31.470 "block_size": 512, 00:21:31.470 "num_blocks": 65536, 00:21:31.470 "uuid": "548a29ed-498a-4139-abc0-e5bf31e39c75", 
00:21:31.470 "assigned_rate_limits": { 00:21:31.470 "rw_ios_per_sec": 0, 00:21:31.470 "rw_mbytes_per_sec": 0, 00:21:31.470 "r_mbytes_per_sec": 0, 00:21:31.470 "w_mbytes_per_sec": 0 00:21:31.470 }, 00:21:31.470 "claimed": true, 00:21:31.470 "claim_type": "exclusive_write", 00:21:31.470 "zoned": false, 00:21:31.470 "supported_io_types": { 00:21:31.470 "read": true, 00:21:31.470 "write": true, 00:21:31.470 "unmap": true, 00:21:31.470 "flush": true, 00:21:31.470 "reset": true, 00:21:31.470 "nvme_admin": false, 00:21:31.470 "nvme_io": false, 00:21:31.470 "nvme_io_md": false, 00:21:31.470 "write_zeroes": true, 00:21:31.470 "zcopy": true, 00:21:31.470 "get_zone_info": false, 00:21:31.470 "zone_management": false, 00:21:31.470 "zone_append": false, 00:21:31.470 "compare": false, 00:21:31.470 "compare_and_write": false, 00:21:31.470 "abort": true, 00:21:31.470 "seek_hole": false, 00:21:31.470 "seek_data": false, 00:21:31.470 "copy": true, 00:21:31.470 "nvme_iov_md": false 00:21:31.470 }, 00:21:31.470 "memory_domains": [ 00:21:31.470 { 00:21:31.470 "dma_device_id": "system", 00:21:31.470 "dma_device_type": 1 00:21:31.470 }, 00:21:31.470 { 00:21:31.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:31.470 "dma_device_type": 2 00:21:31.470 } 00:21:31.470 ], 00:21:31.470 "driver_specific": {} 00:21:31.470 } 00:21:31.470 ] 00:21:31.470 12:20:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.470 12:20:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:31.470 12:20:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:31.470 12:20:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:31.470 12:20:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:31.470 12:20:27 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:31.470 12:20:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:31.470 12:20:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:31.470 12:20:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:31.470 12:20:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:31.470 12:20:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:31.470 12:20:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:31.470 12:20:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.470 12:20:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.470 12:20:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.470 12:20:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:31.470 12:20:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.470 12:20:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:31.470 "name": "Existed_Raid", 00:21:31.470 "uuid": "25be7f17-92a2-4abf-b128-f0d73fd1fbce", 00:21:31.470 "strip_size_kb": 64, 00:21:31.470 "state": "configuring", 00:21:31.470 "raid_level": "raid5f", 00:21:31.470 "superblock": true, 00:21:31.470 "num_base_bdevs": 4, 00:21:31.470 "num_base_bdevs_discovered": 3, 00:21:31.470 "num_base_bdevs_operational": 4, 00:21:31.470 "base_bdevs_list": [ 00:21:31.470 { 00:21:31.470 "name": "BaseBdev1", 00:21:31.470 "uuid": "548a29ed-498a-4139-abc0-e5bf31e39c75", 
00:21:31.470 "is_configured": true, 00:21:31.470 "data_offset": 2048, 00:21:31.470 "data_size": 63488 00:21:31.470 }, 00:21:31.470 { 00:21:31.470 "name": null, 00:21:31.470 "uuid": "19e48426-d9e5-41a1-a0ab-afd18f0a22dc", 00:21:31.470 "is_configured": false, 00:21:31.470 "data_offset": 0, 00:21:31.470 "data_size": 63488 00:21:31.470 }, 00:21:31.470 { 00:21:31.470 "name": "BaseBdev3", 00:21:31.470 "uuid": "84c85fe3-fb7e-45e3-b63a-e24ddf32f7c6", 00:21:31.470 "is_configured": true, 00:21:31.470 "data_offset": 2048, 00:21:31.470 "data_size": 63488 00:21:31.470 }, 00:21:31.470 { 00:21:31.470 "name": "BaseBdev4", 00:21:31.470 "uuid": "3d4e213a-68df-4a45-9381-42a0643e82c4", 00:21:31.470 "is_configured": true, 00:21:31.470 "data_offset": 2048, 00:21:31.470 "data_size": 63488 00:21:31.470 } 00:21:31.470 ] 00:21:31.470 }' 00:21:31.470 12:20:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:31.470 12:20:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.038 12:20:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:32.038 12:20:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.038 12:20:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.038 12:20:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.038 12:20:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.038 12:20:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:21:32.038 12:20:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:21:32.038 12:20:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:21:32.038 12:20:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.038 [2024-11-25 12:20:27.987100] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:32.038 12:20:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.038 12:20:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:32.038 12:20:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:32.038 12:20:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:32.038 12:20:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:32.038 12:20:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:32.038 12:20:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:32.038 12:20:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:32.038 12:20:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:32.038 12:20:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:32.038 12:20:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:32.038 12:20:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:32.038 12:20:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.038 12:20:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.038 12:20:27 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:21:32.038 12:20:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.038 12:20:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:32.038 "name": "Existed_Raid", 00:21:32.038 "uuid": "25be7f17-92a2-4abf-b128-f0d73fd1fbce", 00:21:32.038 "strip_size_kb": 64, 00:21:32.038 "state": "configuring", 00:21:32.038 "raid_level": "raid5f", 00:21:32.038 "superblock": true, 00:21:32.038 "num_base_bdevs": 4, 00:21:32.038 "num_base_bdevs_discovered": 2, 00:21:32.038 "num_base_bdevs_operational": 4, 00:21:32.038 "base_bdevs_list": [ 00:21:32.038 { 00:21:32.038 "name": "BaseBdev1", 00:21:32.038 "uuid": "548a29ed-498a-4139-abc0-e5bf31e39c75", 00:21:32.038 "is_configured": true, 00:21:32.038 "data_offset": 2048, 00:21:32.038 "data_size": 63488 00:21:32.038 }, 00:21:32.038 { 00:21:32.038 "name": null, 00:21:32.038 "uuid": "19e48426-d9e5-41a1-a0ab-afd18f0a22dc", 00:21:32.038 "is_configured": false, 00:21:32.038 "data_offset": 0, 00:21:32.038 "data_size": 63488 00:21:32.038 }, 00:21:32.038 { 00:21:32.038 "name": null, 00:21:32.038 "uuid": "84c85fe3-fb7e-45e3-b63a-e24ddf32f7c6", 00:21:32.038 "is_configured": false, 00:21:32.038 "data_offset": 0, 00:21:32.038 "data_size": 63488 00:21:32.038 }, 00:21:32.038 { 00:21:32.038 "name": "BaseBdev4", 00:21:32.038 "uuid": "3d4e213a-68df-4a45-9381-42a0643e82c4", 00:21:32.038 "is_configured": true, 00:21:32.038 "data_offset": 2048, 00:21:32.038 "data_size": 63488 00:21:32.038 } 00:21:32.038 ] 00:21:32.038 }' 00:21:32.038 12:20:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:32.038 12:20:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.606 12:20:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:32.606 12:20:28 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.606 12:20:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.606 12:20:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.606 12:20:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.606 12:20:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:21:32.606 12:20:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:32.606 12:20:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.606 12:20:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.606 [2024-11-25 12:20:28.583261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:32.606 12:20:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.606 12:20:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:32.606 12:20:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:32.606 12:20:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:32.606 12:20:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:32.606 12:20:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:32.606 12:20:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:32.606 12:20:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:21:32.606 12:20:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:32.606 12:20:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:32.606 12:20:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:32.606 12:20:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.606 12:20:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:32.606 12:20:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.606 12:20:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.606 12:20:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.606 12:20:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:32.606 "name": "Existed_Raid", 00:21:32.606 "uuid": "25be7f17-92a2-4abf-b128-f0d73fd1fbce", 00:21:32.606 "strip_size_kb": 64, 00:21:32.606 "state": "configuring", 00:21:32.606 "raid_level": "raid5f", 00:21:32.606 "superblock": true, 00:21:32.606 "num_base_bdevs": 4, 00:21:32.606 "num_base_bdevs_discovered": 3, 00:21:32.606 "num_base_bdevs_operational": 4, 00:21:32.606 "base_bdevs_list": [ 00:21:32.606 { 00:21:32.606 "name": "BaseBdev1", 00:21:32.606 "uuid": "548a29ed-498a-4139-abc0-e5bf31e39c75", 00:21:32.606 "is_configured": true, 00:21:32.606 "data_offset": 2048, 00:21:32.606 "data_size": 63488 00:21:32.606 }, 00:21:32.606 { 00:21:32.606 "name": null, 00:21:32.606 "uuid": "19e48426-d9e5-41a1-a0ab-afd18f0a22dc", 00:21:32.606 "is_configured": false, 00:21:32.606 "data_offset": 0, 00:21:32.606 "data_size": 63488 00:21:32.606 }, 00:21:32.606 { 00:21:32.606 "name": "BaseBdev3", 00:21:32.606 "uuid": "84c85fe3-fb7e-45e3-b63a-e24ddf32f7c6", 
00:21:32.606 "is_configured": true, 00:21:32.606 "data_offset": 2048, 00:21:32.606 "data_size": 63488 00:21:32.606 }, 00:21:32.606 { 00:21:32.606 "name": "BaseBdev4", 00:21:32.606 "uuid": "3d4e213a-68df-4a45-9381-42a0643e82c4", 00:21:32.606 "is_configured": true, 00:21:32.606 "data_offset": 2048, 00:21:32.606 "data_size": 63488 00:21:32.606 } 00:21:32.606 ] 00:21:32.606 }' 00:21:32.606 12:20:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:32.606 12:20:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.196 12:20:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.197 12:20:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.197 12:20:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.197 12:20:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:33.197 12:20:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.197 12:20:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:21:33.197 12:20:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:33.197 12:20:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.197 12:20:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.197 [2024-11-25 12:20:29.155495] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:33.197 12:20:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.197 12:20:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:21:33.197 12:20:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:33.197 12:20:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:33.197 12:20:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:33.197 12:20:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:33.197 12:20:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:33.197 12:20:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:33.197 12:20:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:33.197 12:20:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:33.197 12:20:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:33.197 12:20:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.197 12:20:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:33.197 12:20:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.197 12:20:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.197 12:20:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.488 12:20:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:33.488 "name": "Existed_Raid", 00:21:33.488 "uuid": "25be7f17-92a2-4abf-b128-f0d73fd1fbce", 00:21:33.488 "strip_size_kb": 64, 00:21:33.488 "state": "configuring", 00:21:33.488 "raid_level": "raid5f", 
00:21:33.488 "superblock": true, 00:21:33.488 "num_base_bdevs": 4, 00:21:33.488 "num_base_bdevs_discovered": 2, 00:21:33.488 "num_base_bdevs_operational": 4, 00:21:33.488 "base_bdevs_list": [ 00:21:33.488 { 00:21:33.488 "name": null, 00:21:33.488 "uuid": "548a29ed-498a-4139-abc0-e5bf31e39c75", 00:21:33.488 "is_configured": false, 00:21:33.488 "data_offset": 0, 00:21:33.488 "data_size": 63488 00:21:33.488 }, 00:21:33.488 { 00:21:33.488 "name": null, 00:21:33.488 "uuid": "19e48426-d9e5-41a1-a0ab-afd18f0a22dc", 00:21:33.488 "is_configured": false, 00:21:33.488 "data_offset": 0, 00:21:33.488 "data_size": 63488 00:21:33.488 }, 00:21:33.488 { 00:21:33.488 "name": "BaseBdev3", 00:21:33.488 "uuid": "84c85fe3-fb7e-45e3-b63a-e24ddf32f7c6", 00:21:33.488 "is_configured": true, 00:21:33.488 "data_offset": 2048, 00:21:33.488 "data_size": 63488 00:21:33.488 }, 00:21:33.488 { 00:21:33.488 "name": "BaseBdev4", 00:21:33.488 "uuid": "3d4e213a-68df-4a45-9381-42a0643e82c4", 00:21:33.488 "is_configured": true, 00:21:33.488 "data_offset": 2048, 00:21:33.488 "data_size": 63488 00:21:33.488 } 00:21:33.488 ] 00:21:33.488 }' 00:21:33.488 12:20:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:33.488 12:20:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.747 12:20:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:33.747 12:20:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.747 12:20:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.747 12:20:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.747 12:20:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.747 12:20:29 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:21:33.747 12:20:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:33.747 12:20:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.747 12:20:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.747 [2024-11-25 12:20:29.796108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:33.747 12:20:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.747 12:20:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:33.747 12:20:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:33.747 12:20:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:33.747 12:20:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:33.747 12:20:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:33.747 12:20:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:33.747 12:20:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:33.747 12:20:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:33.747 12:20:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:33.747 12:20:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:33.747 12:20:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:21:33.747 12:20:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.747 12:20:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.747 12:20:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:33.747 12:20:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.006 12:20:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:34.006 "name": "Existed_Raid", 00:21:34.006 "uuid": "25be7f17-92a2-4abf-b128-f0d73fd1fbce", 00:21:34.006 "strip_size_kb": 64, 00:21:34.006 "state": "configuring", 00:21:34.006 "raid_level": "raid5f", 00:21:34.006 "superblock": true, 00:21:34.006 "num_base_bdevs": 4, 00:21:34.006 "num_base_bdevs_discovered": 3, 00:21:34.006 "num_base_bdevs_operational": 4, 00:21:34.006 "base_bdevs_list": [ 00:21:34.006 { 00:21:34.006 "name": null, 00:21:34.006 "uuid": "548a29ed-498a-4139-abc0-e5bf31e39c75", 00:21:34.006 "is_configured": false, 00:21:34.006 "data_offset": 0, 00:21:34.006 "data_size": 63488 00:21:34.006 }, 00:21:34.006 { 00:21:34.006 "name": "BaseBdev2", 00:21:34.006 "uuid": "19e48426-d9e5-41a1-a0ab-afd18f0a22dc", 00:21:34.006 "is_configured": true, 00:21:34.006 "data_offset": 2048, 00:21:34.006 "data_size": 63488 00:21:34.006 }, 00:21:34.006 { 00:21:34.006 "name": "BaseBdev3", 00:21:34.006 "uuid": "84c85fe3-fb7e-45e3-b63a-e24ddf32f7c6", 00:21:34.006 "is_configured": true, 00:21:34.006 "data_offset": 2048, 00:21:34.006 "data_size": 63488 00:21:34.006 }, 00:21:34.006 { 00:21:34.006 "name": "BaseBdev4", 00:21:34.006 "uuid": "3d4e213a-68df-4a45-9381-42a0643e82c4", 00:21:34.006 "is_configured": true, 00:21:34.006 "data_offset": 2048, 00:21:34.006 "data_size": 63488 00:21:34.006 } 00:21:34.006 ] 00:21:34.006 }' 00:21:34.006 12:20:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:21:34.006 12:20:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:34.264 12:20:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.264 12:20:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.264 12:20:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:34.264 12:20:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:34.523 12:20:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.523 12:20:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:21:34.523 12:20:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.523 12:20:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.523 12:20:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:34.523 12:20:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:34.523 12:20:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.523 12:20:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 548a29ed-498a-4139-abc0-e5bf31e39c75 00:21:34.523 12:20:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.523 12:20:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:34.523 [2024-11-25 12:20:30.503402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:34.523 [2024-11-25 12:20:30.503713] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:34.523 [2024-11-25 12:20:30.503731] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:34.523 [2024-11-25 12:20:30.504057] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:21:34.523 NewBaseBdev 00:21:34.523 12:20:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.523 12:20:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:21:34.523 12:20:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:21:34.523 12:20:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:34.523 12:20:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:34.523 12:20:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:34.524 12:20:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:34.524 12:20:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:34.524 12:20:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.524 12:20:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:34.524 [2024-11-25 12:20:30.510585] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:34.524 [2024-11-25 12:20:30.510623] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:21:34.524 [2024-11-25 12:20:30.510916] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:34.524 12:20:30 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.524 12:20:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:34.524 12:20:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.524 12:20:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:34.524 [ 00:21:34.524 { 00:21:34.524 "name": "NewBaseBdev", 00:21:34.524 "aliases": [ 00:21:34.524 "548a29ed-498a-4139-abc0-e5bf31e39c75" 00:21:34.524 ], 00:21:34.524 "product_name": "Malloc disk", 00:21:34.524 "block_size": 512, 00:21:34.524 "num_blocks": 65536, 00:21:34.524 "uuid": "548a29ed-498a-4139-abc0-e5bf31e39c75", 00:21:34.524 "assigned_rate_limits": { 00:21:34.524 "rw_ios_per_sec": 0, 00:21:34.524 "rw_mbytes_per_sec": 0, 00:21:34.524 "r_mbytes_per_sec": 0, 00:21:34.524 "w_mbytes_per_sec": 0 00:21:34.524 }, 00:21:34.524 "claimed": true, 00:21:34.524 "claim_type": "exclusive_write", 00:21:34.524 "zoned": false, 00:21:34.524 "supported_io_types": { 00:21:34.524 "read": true, 00:21:34.524 "write": true, 00:21:34.524 "unmap": true, 00:21:34.524 "flush": true, 00:21:34.524 "reset": true, 00:21:34.524 "nvme_admin": false, 00:21:34.524 "nvme_io": false, 00:21:34.524 "nvme_io_md": false, 00:21:34.524 "write_zeroes": true, 00:21:34.524 "zcopy": true, 00:21:34.524 "get_zone_info": false, 00:21:34.524 "zone_management": false, 00:21:34.524 "zone_append": false, 00:21:34.524 "compare": false, 00:21:34.524 "compare_and_write": false, 00:21:34.524 "abort": true, 00:21:34.524 "seek_hole": false, 00:21:34.524 "seek_data": false, 00:21:34.524 "copy": true, 00:21:34.524 "nvme_iov_md": false 00:21:34.524 }, 00:21:34.524 "memory_domains": [ 00:21:34.524 { 00:21:34.524 "dma_device_id": "system", 00:21:34.524 "dma_device_type": 1 00:21:34.524 }, 00:21:34.524 { 00:21:34.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:34.524 "dma_device_type": 2 00:21:34.524 } 
00:21:34.524 ], 00:21:34.524 "driver_specific": {} 00:21:34.524 } 00:21:34.524 ] 00:21:34.524 12:20:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.524 12:20:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:34.524 12:20:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:21:34.524 12:20:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:34.524 12:20:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:34.524 12:20:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:34.524 12:20:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:34.524 12:20:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:34.524 12:20:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:34.524 12:20:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:34.524 12:20:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:34.524 12:20:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:34.524 12:20:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:34.524 12:20:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.524 12:20:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.524 12:20:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:34.524 
12:20:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.524 12:20:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:34.524 "name": "Existed_Raid", 00:21:34.524 "uuid": "25be7f17-92a2-4abf-b128-f0d73fd1fbce", 00:21:34.524 "strip_size_kb": 64, 00:21:34.524 "state": "online", 00:21:34.524 "raid_level": "raid5f", 00:21:34.524 "superblock": true, 00:21:34.524 "num_base_bdevs": 4, 00:21:34.524 "num_base_bdevs_discovered": 4, 00:21:34.524 "num_base_bdevs_operational": 4, 00:21:34.524 "base_bdevs_list": [ 00:21:34.524 { 00:21:34.524 "name": "NewBaseBdev", 00:21:34.524 "uuid": "548a29ed-498a-4139-abc0-e5bf31e39c75", 00:21:34.524 "is_configured": true, 00:21:34.524 "data_offset": 2048, 00:21:34.524 "data_size": 63488 00:21:34.524 }, 00:21:34.524 { 00:21:34.524 "name": "BaseBdev2", 00:21:34.524 "uuid": "19e48426-d9e5-41a1-a0ab-afd18f0a22dc", 00:21:34.524 "is_configured": true, 00:21:34.524 "data_offset": 2048, 00:21:34.524 "data_size": 63488 00:21:34.524 }, 00:21:34.524 { 00:21:34.524 "name": "BaseBdev3", 00:21:34.524 "uuid": "84c85fe3-fb7e-45e3-b63a-e24ddf32f7c6", 00:21:34.524 "is_configured": true, 00:21:34.524 "data_offset": 2048, 00:21:34.524 "data_size": 63488 00:21:34.524 }, 00:21:34.524 { 00:21:34.524 "name": "BaseBdev4", 00:21:34.524 "uuid": "3d4e213a-68df-4a45-9381-42a0643e82c4", 00:21:34.524 "is_configured": true, 00:21:34.524 "data_offset": 2048, 00:21:34.524 "data_size": 63488 00:21:34.524 } 00:21:34.524 ] 00:21:34.524 }' 00:21:34.524 12:20:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:34.524 12:20:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:35.090 12:20:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:21:35.090 12:20:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:21:35.090 12:20:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:35.090 12:20:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:35.090 12:20:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:21:35.090 12:20:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:35.090 12:20:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:35.090 12:20:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.090 12:20:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:35.090 12:20:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:35.090 [2024-11-25 12:20:31.042884] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:35.090 12:20:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.090 12:20:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:35.090 "name": "Existed_Raid", 00:21:35.090 "aliases": [ 00:21:35.090 "25be7f17-92a2-4abf-b128-f0d73fd1fbce" 00:21:35.090 ], 00:21:35.090 "product_name": "Raid Volume", 00:21:35.090 "block_size": 512, 00:21:35.090 "num_blocks": 190464, 00:21:35.090 "uuid": "25be7f17-92a2-4abf-b128-f0d73fd1fbce", 00:21:35.090 "assigned_rate_limits": { 00:21:35.090 "rw_ios_per_sec": 0, 00:21:35.090 "rw_mbytes_per_sec": 0, 00:21:35.090 "r_mbytes_per_sec": 0, 00:21:35.090 "w_mbytes_per_sec": 0 00:21:35.090 }, 00:21:35.090 "claimed": false, 00:21:35.090 "zoned": false, 00:21:35.090 "supported_io_types": { 00:21:35.090 "read": true, 00:21:35.090 "write": true, 00:21:35.090 "unmap": false, 00:21:35.090 "flush": false, 
00:21:35.090 "reset": true, 00:21:35.091 "nvme_admin": false, 00:21:35.091 "nvme_io": false, 00:21:35.091 "nvme_io_md": false, 00:21:35.091 "write_zeroes": true, 00:21:35.091 "zcopy": false, 00:21:35.091 "get_zone_info": false, 00:21:35.091 "zone_management": false, 00:21:35.091 "zone_append": false, 00:21:35.091 "compare": false, 00:21:35.091 "compare_and_write": false, 00:21:35.091 "abort": false, 00:21:35.091 "seek_hole": false, 00:21:35.091 "seek_data": false, 00:21:35.091 "copy": false, 00:21:35.091 "nvme_iov_md": false 00:21:35.091 }, 00:21:35.091 "driver_specific": { 00:21:35.091 "raid": { 00:21:35.091 "uuid": "25be7f17-92a2-4abf-b128-f0d73fd1fbce", 00:21:35.091 "strip_size_kb": 64, 00:21:35.091 "state": "online", 00:21:35.091 "raid_level": "raid5f", 00:21:35.091 "superblock": true, 00:21:35.091 "num_base_bdevs": 4, 00:21:35.091 "num_base_bdevs_discovered": 4, 00:21:35.091 "num_base_bdevs_operational": 4, 00:21:35.091 "base_bdevs_list": [ 00:21:35.091 { 00:21:35.091 "name": "NewBaseBdev", 00:21:35.091 "uuid": "548a29ed-498a-4139-abc0-e5bf31e39c75", 00:21:35.091 "is_configured": true, 00:21:35.091 "data_offset": 2048, 00:21:35.091 "data_size": 63488 00:21:35.091 }, 00:21:35.091 { 00:21:35.091 "name": "BaseBdev2", 00:21:35.091 "uuid": "19e48426-d9e5-41a1-a0ab-afd18f0a22dc", 00:21:35.091 "is_configured": true, 00:21:35.091 "data_offset": 2048, 00:21:35.091 "data_size": 63488 00:21:35.091 }, 00:21:35.091 { 00:21:35.091 "name": "BaseBdev3", 00:21:35.091 "uuid": "84c85fe3-fb7e-45e3-b63a-e24ddf32f7c6", 00:21:35.091 "is_configured": true, 00:21:35.091 "data_offset": 2048, 00:21:35.091 "data_size": 63488 00:21:35.091 }, 00:21:35.091 { 00:21:35.091 "name": "BaseBdev4", 00:21:35.091 "uuid": "3d4e213a-68df-4a45-9381-42a0643e82c4", 00:21:35.091 "is_configured": true, 00:21:35.091 "data_offset": 2048, 00:21:35.091 "data_size": 63488 00:21:35.091 } 00:21:35.091 ] 00:21:35.091 } 00:21:35.091 } 00:21:35.091 }' 00:21:35.091 12:20:31 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:35.091 12:20:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:21:35.091 BaseBdev2 00:21:35.091 BaseBdev3 00:21:35.091 BaseBdev4' 00:21:35.091 12:20:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:35.350 12:20:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:35.350 12:20:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:35.350 12:20:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:21:35.350 12:20:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.350 12:20:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:35.350 12:20:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:35.350 12:20:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.350 12:20:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:35.350 12:20:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:35.350 12:20:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:35.350 12:20:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:35.350 12:20:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.350 12:20:31 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:21:35.350 12:20:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:35.350 12:20:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.350 12:20:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:35.350 12:20:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:35.350 12:20:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:35.350 12:20:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:35.350 12:20:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.350 12:20:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:35.350 12:20:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:35.350 12:20:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.350 12:20:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:35.350 12:20:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:35.350 12:20:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:35.350 12:20:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:21:35.350 12:20:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:35.350 12:20:31 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.350 12:20:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:35.350 12:20:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.350 12:20:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:35.350 12:20:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:35.350 12:20:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:35.350 12:20:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.350 12:20:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:35.350 [2024-11-25 12:20:31.410675] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:35.350 [2024-11-25 12:20:31.410722] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:35.350 [2024-11-25 12:20:31.410881] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:35.350 [2024-11-25 12:20:31.411413] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:35.350 [2024-11-25 12:20:31.411461] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:21:35.350 12:20:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.350 12:20:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83835 00:21:35.350 12:20:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83835 ']' 00:21:35.350 12:20:31 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 83835 00:21:35.350 12:20:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:21:35.350 12:20:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:35.350 12:20:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83835 00:21:35.609 killing process with pid 83835 00:21:35.609 12:20:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:35.609 12:20:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:35.609 12:20:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83835' 00:21:35.609 12:20:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83835 00:21:35.609 [2024-11-25 12:20:31.446050] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:35.609 12:20:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83835 00:21:35.868 [2024-11-25 12:20:31.837561] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:36.804 12:20:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:21:36.804 00:21:36.804 real 0m13.023s 00:21:36.805 user 0m21.538s 00:21:36.805 sys 0m1.835s 00:21:36.805 12:20:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:36.805 12:20:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:36.805 ************************************ 00:21:36.805 END TEST raid5f_state_function_test_sb 00:21:36.805 ************************************ 00:21:37.064 12:20:32 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:21:37.064 12:20:32 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:37.064 12:20:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:37.064 12:20:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:37.064 ************************************ 00:21:37.064 START TEST raid5f_superblock_test 00:21:37.064 ************************************ 00:21:37.064 12:20:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:21:37.064 12:20:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:21:37.064 12:20:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:21:37.064 12:20:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:37.064 12:20:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:21:37.064 12:20:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:37.064 12:20:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:21:37.064 12:20:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:37.064 12:20:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:37.064 12:20:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:21:37.064 12:20:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:21:37.064 12:20:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:21:37.064 12:20:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:37.064 12:20:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:37.064 12:20:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:21:37.064 12:20:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:21:37.064 12:20:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:21:37.064 12:20:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84517 00:21:37.064 12:20:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84517 00:21:37.064 12:20:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84517 ']' 00:21:37.064 12:20:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.064 12:20:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:37.064 12:20:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:37.064 12:20:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:37.064 12:20:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.064 12:20:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:21:37.064 [2024-11-25 12:20:33.050210] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:21:37.064 [2024-11-25 12:20:33.050455] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84517 ] 00:21:37.322 [2024-11-25 12:20:33.242424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.322 [2024-11-25 12:20:33.395763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.581 [2024-11-25 12:20:33.615105] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:37.581 [2024-11-25 12:20:33.615170] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:38.153 12:20:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:38.153 12:20:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:21:38.153 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:38.153 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:38.153 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:38.153 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:21:38.153 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:38.153 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:38.153 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:38.153 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:38.153 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:21:38.153 12:20:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.153 12:20:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.153 malloc1 00:21:38.153 12:20:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.153 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:38.153 12:20:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.153 12:20:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.153 [2024-11-25 12:20:34.147097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:38.153 [2024-11-25 12:20:34.147307] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:38.153 [2024-11-25 12:20:34.147411] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:38.153 [2024-11-25 12:20:34.147582] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:38.154 [2024-11-25 12:20:34.150396] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:38.154 [2024-11-25 12:20:34.150463] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:38.154 pt1 00:21:38.154 12:20:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.154 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:38.154 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:38.154 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:38.154 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:21:38.154 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:38.154 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:38.154 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:38.154 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:38.154 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:21:38.154 12:20:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.154 12:20:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.154 malloc2 00:21:38.154 12:20:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.154 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:38.154 12:20:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.154 12:20:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.154 [2024-11-25 12:20:34.203113] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:38.154 [2024-11-25 12:20:34.203313] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:38.154 [2024-11-25 12:20:34.203374] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:38.154 [2024-11-25 12:20:34.203392] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:38.154 [2024-11-25 12:20:34.206107] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:38.154 [2024-11-25 12:20:34.206151] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:38.154 pt2 00:21:38.154 12:20:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.154 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:38.154 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:38.154 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:21:38.154 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:21:38.154 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:38.154 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:38.154 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:38.154 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:38.154 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:21:38.154 12:20:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.154 12:20:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.413 malloc3 00:21:38.413 12:20:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.413 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:38.413 12:20:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.413 12:20:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.413 [2024-11-25 12:20:34.273841] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:38.413 [2024-11-25 12:20:34.273919] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:38.413 [2024-11-25 12:20:34.273969] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:38.413 [2024-11-25 12:20:34.273985] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:38.413 [2024-11-25 12:20:34.276779] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:38.413 [2024-11-25 12:20:34.276820] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:38.413 pt3 00:21:38.413 12:20:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.413 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:38.413 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:38.413 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:21:38.413 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:21:38.413 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:21:38.413 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:38.414 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:38.414 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:38.414 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:21:38.414 12:20:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.414 12:20:34 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.414 malloc4 00:21:38.414 12:20:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.414 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:38.414 12:20:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.414 12:20:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.414 [2024-11-25 12:20:34.329462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:38.414 [2024-11-25 12:20:34.329652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:38.414 [2024-11-25 12:20:34.329727] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:38.414 [2024-11-25 12:20:34.329834] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:38.414 [2024-11-25 12:20:34.332862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:38.414 [2024-11-25 12:20:34.333042] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:38.414 pt4 00:21:38.414 12:20:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.414 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:38.414 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:38.414 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:21:38.414 12:20:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.414 12:20:34 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:38.414 [2024-11-25 12:20:34.341503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:38.414 [2024-11-25 12:20:34.344055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:38.414 [2024-11-25 12:20:34.344284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:38.414 [2024-11-25 12:20:34.344439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:38.414 [2024-11-25 12:20:34.344771] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:38.414 [2024-11-25 12:20:34.344891] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:38.414 [2024-11-25 12:20:34.345233] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:38.414 [2024-11-25 12:20:34.352032] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:38.414 [2024-11-25 12:20:34.352173] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:38.414 [2024-11-25 12:20:34.352437] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:38.414 12:20:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.414 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:21:38.414 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:38.414 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:38.414 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:38.414 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:38.414 
12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:38.414 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:38.414 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:38.414 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:38.414 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:38.414 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:38.414 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.414 12:20:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.414 12:20:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.414 12:20:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.414 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:38.414 "name": "raid_bdev1", 00:21:38.414 "uuid": "7502aa39-b5c3-4f27-9c2f-15fdcd66f1b2", 00:21:38.414 "strip_size_kb": 64, 00:21:38.414 "state": "online", 00:21:38.414 "raid_level": "raid5f", 00:21:38.414 "superblock": true, 00:21:38.414 "num_base_bdevs": 4, 00:21:38.414 "num_base_bdevs_discovered": 4, 00:21:38.414 "num_base_bdevs_operational": 4, 00:21:38.414 "base_bdevs_list": [ 00:21:38.414 { 00:21:38.414 "name": "pt1", 00:21:38.414 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:38.414 "is_configured": true, 00:21:38.414 "data_offset": 2048, 00:21:38.414 "data_size": 63488 00:21:38.414 }, 00:21:38.414 { 00:21:38.414 "name": "pt2", 00:21:38.414 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:38.414 "is_configured": true, 00:21:38.414 "data_offset": 2048, 00:21:38.414 
"data_size": 63488 00:21:38.414 }, 00:21:38.414 { 00:21:38.414 "name": "pt3", 00:21:38.414 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:38.414 "is_configured": true, 00:21:38.414 "data_offset": 2048, 00:21:38.414 "data_size": 63488 00:21:38.414 }, 00:21:38.414 { 00:21:38.414 "name": "pt4", 00:21:38.414 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:38.414 "is_configured": true, 00:21:38.414 "data_offset": 2048, 00:21:38.414 "data_size": 63488 00:21:38.414 } 00:21:38.414 ] 00:21:38.414 }' 00:21:38.414 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:38.414 12:20:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.983 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:21:38.983 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:38.983 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:38.983 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:38.983 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:38.983 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:38.983 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:38.983 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:38.983 12:20:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.983 12:20:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.983 [2024-11-25 12:20:34.840328] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:38.983 12:20:34 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.983 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:38.983 "name": "raid_bdev1", 00:21:38.983 "aliases": [ 00:21:38.983 "7502aa39-b5c3-4f27-9c2f-15fdcd66f1b2" 00:21:38.983 ], 00:21:38.983 "product_name": "Raid Volume", 00:21:38.983 "block_size": 512, 00:21:38.983 "num_blocks": 190464, 00:21:38.983 "uuid": "7502aa39-b5c3-4f27-9c2f-15fdcd66f1b2", 00:21:38.983 "assigned_rate_limits": { 00:21:38.983 "rw_ios_per_sec": 0, 00:21:38.983 "rw_mbytes_per_sec": 0, 00:21:38.983 "r_mbytes_per_sec": 0, 00:21:38.983 "w_mbytes_per_sec": 0 00:21:38.983 }, 00:21:38.983 "claimed": false, 00:21:38.983 "zoned": false, 00:21:38.983 "supported_io_types": { 00:21:38.983 "read": true, 00:21:38.983 "write": true, 00:21:38.983 "unmap": false, 00:21:38.983 "flush": false, 00:21:38.983 "reset": true, 00:21:38.983 "nvme_admin": false, 00:21:38.983 "nvme_io": false, 00:21:38.983 "nvme_io_md": false, 00:21:38.983 "write_zeroes": true, 00:21:38.983 "zcopy": false, 00:21:38.983 "get_zone_info": false, 00:21:38.983 "zone_management": false, 00:21:38.983 "zone_append": false, 00:21:38.983 "compare": false, 00:21:38.983 "compare_and_write": false, 00:21:38.983 "abort": false, 00:21:38.983 "seek_hole": false, 00:21:38.983 "seek_data": false, 00:21:38.983 "copy": false, 00:21:38.983 "nvme_iov_md": false 00:21:38.983 }, 00:21:38.983 "driver_specific": { 00:21:38.983 "raid": { 00:21:38.983 "uuid": "7502aa39-b5c3-4f27-9c2f-15fdcd66f1b2", 00:21:38.983 "strip_size_kb": 64, 00:21:38.983 "state": "online", 00:21:38.983 "raid_level": "raid5f", 00:21:38.983 "superblock": true, 00:21:38.983 "num_base_bdevs": 4, 00:21:38.983 "num_base_bdevs_discovered": 4, 00:21:38.983 "num_base_bdevs_operational": 4, 00:21:38.983 "base_bdevs_list": [ 00:21:38.983 { 00:21:38.983 "name": "pt1", 00:21:38.983 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:38.983 "is_configured": true, 00:21:38.983 "data_offset": 2048, 
00:21:38.983 "data_size": 63488 00:21:38.983 }, 00:21:38.983 { 00:21:38.983 "name": "pt2", 00:21:38.983 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:38.983 "is_configured": true, 00:21:38.983 "data_offset": 2048, 00:21:38.983 "data_size": 63488 00:21:38.983 }, 00:21:38.983 { 00:21:38.983 "name": "pt3", 00:21:38.983 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:38.983 "is_configured": true, 00:21:38.983 "data_offset": 2048, 00:21:38.983 "data_size": 63488 00:21:38.983 }, 00:21:38.983 { 00:21:38.983 "name": "pt4", 00:21:38.983 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:38.983 "is_configured": true, 00:21:38.983 "data_offset": 2048, 00:21:38.983 "data_size": 63488 00:21:38.983 } 00:21:38.983 ] 00:21:38.983 } 00:21:38.983 } 00:21:38.983 }' 00:21:38.983 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:38.983 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:38.983 pt2 00:21:38.983 pt3 00:21:38.983 pt4' 00:21:38.983 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:38.983 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:38.983 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:38.983 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:38.983 12:20:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:38.983 12:20:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.983 12:20:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.983 12:20:35 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.983 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:38.983 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:38.983 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:38.983 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:38.983 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.983 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.983 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:38.983 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.242 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:39.242 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:39.242 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:39.242 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:39.242 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:21:39.242 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.242 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.242 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.242 12:20:35 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:39.242 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:39.242 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:39.242 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:21:39.242 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:39.242 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.242 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.242 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.242 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:39.242 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:39.242 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:39.242 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:39.242 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.242 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.242 [2024-11-25 12:20:35.212367] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:39.242 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.242 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7502aa39-b5c3-4f27-9c2f-15fdcd66f1b2 00:21:39.242 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
7502aa39-b5c3-4f27-9c2f-15fdcd66f1b2 ']' 00:21:39.242 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:39.242 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.242 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.242 [2024-11-25 12:20:35.260182] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:39.242 [2024-11-25 12:20:35.260212] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:39.242 [2024-11-25 12:20:35.260328] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:39.242 [2024-11-25 12:20:35.260477] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:39.242 [2024-11-25 12:20:35.260504] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:39.242 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.242 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.242 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.242 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.242 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:39.242 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.242 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:39.242 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:39.242 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:39.242 
12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:21:39.242 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.242 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.501 12:20:35 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.501 [2024-11-25 12:20:35.428246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:39.501 [2024-11-25 12:20:35.430891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:39.501 [2024-11-25 12:20:35.430961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:21:39.501 [2024-11-25 12:20:35.431014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:21:39.501 [2024-11-25 12:20:35.431093] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:39.501 [2024-11-25 12:20:35.431162] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:39.501 [2024-11-25 12:20:35.431198] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:21:39.501 [2024-11-25 12:20:35.431232] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:21:39.501 [2024-11-25 12:20:35.431254] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:39.501 [2024-11-25 12:20:35.431273] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:21:39.501 request: 00:21:39.501 { 00:21:39.501 "name": "raid_bdev1", 00:21:39.501 "raid_level": "raid5f", 00:21:39.501 "base_bdevs": [ 00:21:39.501 "malloc1", 00:21:39.501 "malloc2", 00:21:39.501 "malloc3", 00:21:39.501 "malloc4" 00:21:39.501 ], 00:21:39.501 "strip_size_kb": 64, 00:21:39.501 "superblock": false, 00:21:39.501 "method": "bdev_raid_create", 00:21:39.501 "req_id": 1 00:21:39.501 } 00:21:39.501 Got JSON-RPC error response 
00:21:39.501 response: 00:21:39.501 { 00:21:39.501 "code": -17, 00:21:39.501 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:39.501 } 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.501 [2024-11-25 12:20:35.500262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:39.501 [2024-11-25 12:20:35.500331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:21:39.501 [2024-11-25 12:20:35.500380] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:39.501 [2024-11-25 12:20:35.500400] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:39.501 [2024-11-25 12:20:35.503383] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:39.501 [2024-11-25 12:20:35.503449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:39.501 [2024-11-25 12:20:35.503551] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:39.501 [2024-11-25 12:20:35.503628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:39.501 pt1 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.501 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:39.501 "name": "raid_bdev1", 00:21:39.501 "uuid": "7502aa39-b5c3-4f27-9c2f-15fdcd66f1b2", 00:21:39.501 "strip_size_kb": 64, 00:21:39.501 "state": "configuring", 00:21:39.501 "raid_level": "raid5f", 00:21:39.501 "superblock": true, 00:21:39.501 "num_base_bdevs": 4, 00:21:39.501 "num_base_bdevs_discovered": 1, 00:21:39.501 "num_base_bdevs_operational": 4, 00:21:39.501 "base_bdevs_list": [ 00:21:39.501 { 00:21:39.501 "name": "pt1", 00:21:39.501 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:39.501 "is_configured": true, 00:21:39.501 "data_offset": 2048, 00:21:39.501 "data_size": 63488 00:21:39.501 }, 00:21:39.501 { 00:21:39.501 "name": null, 00:21:39.501 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:39.501 "is_configured": false, 00:21:39.501 "data_offset": 2048, 00:21:39.501 "data_size": 63488 00:21:39.501 }, 00:21:39.501 { 00:21:39.501 "name": null, 00:21:39.501 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:39.501 "is_configured": false, 00:21:39.501 "data_offset": 2048, 00:21:39.502 "data_size": 63488 00:21:39.502 }, 00:21:39.502 { 00:21:39.502 "name": null, 00:21:39.502 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:39.502 "is_configured": false, 00:21:39.502 "data_offset": 2048, 00:21:39.502 "data_size": 63488 00:21:39.502 } 00:21:39.502 ] 00:21:39.502 }' 
00:21:39.502 12:20:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:39.502 12:20:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.070 12:20:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:21:40.070 12:20:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:40.070 12:20:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.070 12:20:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.070 [2024-11-25 12:20:36.024444] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:40.070 [2024-11-25 12:20:36.024543] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:40.070 [2024-11-25 12:20:36.024574] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:40.070 [2024-11-25 12:20:36.024592] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:40.070 [2024-11-25 12:20:36.025161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:40.070 [2024-11-25 12:20:36.025200] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:40.070 [2024-11-25 12:20:36.025301] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:40.070 [2024-11-25 12:20:36.025353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:40.070 pt2 00:21:40.070 12:20:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.070 12:20:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:21:40.070 12:20:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:21:40.070 12:20:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.070 [2024-11-25 12:20:36.032442] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:21:40.070 12:20:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.070 12:20:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:21:40.070 12:20:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:40.070 12:20:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:40.070 12:20:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:40.070 12:20:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:40.070 12:20:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:40.070 12:20:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:40.070 12:20:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:40.070 12:20:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:40.070 12:20:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:40.070 12:20:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.070 12:20:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:40.070 12:20:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.070 12:20:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.070 12:20:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:21:40.070 12:20:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:40.070 "name": "raid_bdev1", 00:21:40.070 "uuid": "7502aa39-b5c3-4f27-9c2f-15fdcd66f1b2", 00:21:40.070 "strip_size_kb": 64, 00:21:40.070 "state": "configuring", 00:21:40.070 "raid_level": "raid5f", 00:21:40.070 "superblock": true, 00:21:40.070 "num_base_bdevs": 4, 00:21:40.070 "num_base_bdevs_discovered": 1, 00:21:40.070 "num_base_bdevs_operational": 4, 00:21:40.070 "base_bdevs_list": [ 00:21:40.070 { 00:21:40.070 "name": "pt1", 00:21:40.070 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:40.070 "is_configured": true, 00:21:40.070 "data_offset": 2048, 00:21:40.070 "data_size": 63488 00:21:40.070 }, 00:21:40.070 { 00:21:40.070 "name": null, 00:21:40.070 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:40.070 "is_configured": false, 00:21:40.070 "data_offset": 0, 00:21:40.070 "data_size": 63488 00:21:40.070 }, 00:21:40.070 { 00:21:40.070 "name": null, 00:21:40.070 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:40.070 "is_configured": false, 00:21:40.070 "data_offset": 2048, 00:21:40.070 "data_size": 63488 00:21:40.070 }, 00:21:40.070 { 00:21:40.070 "name": null, 00:21:40.070 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:40.070 "is_configured": false, 00:21:40.070 "data_offset": 2048, 00:21:40.070 "data_size": 63488 00:21:40.070 } 00:21:40.070 ] 00:21:40.070 }' 00:21:40.070 12:20:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:40.070 12:20:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.639 12:20:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:40.639 12:20:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:40.639 12:20:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:21:40.639 12:20:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.639 12:20:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.639 [2024-11-25 12:20:36.556615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:40.639 [2024-11-25 12:20:36.556694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:40.639 [2024-11-25 12:20:36.556734] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:40.639 [2024-11-25 12:20:36.556770] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:40.639 [2024-11-25 12:20:36.557432] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:40.639 [2024-11-25 12:20:36.557459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:40.639 [2024-11-25 12:20:36.557575] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:40.639 [2024-11-25 12:20:36.557607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:40.639 pt2 00:21:40.639 12:20:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.639 12:20:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:40.639 12:20:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:40.639 12:20:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:40.639 12:20:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.639 12:20:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.639 [2024-11-25 12:20:36.568589] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:21:40.639 [2024-11-25 12:20:36.568649] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:40.639 [2024-11-25 12:20:36.568684] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:21:40.639 [2024-11-25 12:20:36.568698] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:40.639 [2024-11-25 12:20:36.569161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:40.639 [2024-11-25 12:20:36.569193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:40.639 [2024-11-25 12:20:36.569275] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:40.639 [2024-11-25 12:20:36.569304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:40.639 pt3 00:21:40.639 12:20:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.639 12:20:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:40.639 12:20:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:40.639 12:20:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:40.639 12:20:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.639 12:20:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.639 [2024-11-25 12:20:36.576551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:40.639 [2024-11-25 12:20:36.576611] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:40.639 [2024-11-25 12:20:36.576639] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:21:40.639 [2024-11-25 12:20:36.576653] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:40.639 [2024-11-25 12:20:36.577106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:40.639 [2024-11-25 12:20:36.577130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:40.639 [2024-11-25 12:20:36.577210] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:21:40.639 [2024-11-25 12:20:36.577237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:40.639 [2024-11-25 12:20:36.577445] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:40.639 [2024-11-25 12:20:36.577462] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:40.639 [2024-11-25 12:20:36.577753] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:40.639 [2024-11-25 12:20:36.584467] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:40.639 [2024-11-25 12:20:36.584499] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:40.639 [2024-11-25 12:20:36.584720] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:40.639 pt4 00:21:40.639 12:20:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.639 12:20:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:40.639 12:20:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:40.639 12:20:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:21:40.639 12:20:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:40.639 12:20:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:21:40.639 12:20:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:40.639 12:20:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:40.639 12:20:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:40.639 12:20:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:40.639 12:20:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:40.639 12:20:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:40.639 12:20:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:40.639 12:20:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.639 12:20:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:40.639 12:20:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.639 12:20:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.639 12:20:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.639 12:20:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:40.639 "name": "raid_bdev1", 00:21:40.639 "uuid": "7502aa39-b5c3-4f27-9c2f-15fdcd66f1b2", 00:21:40.639 "strip_size_kb": 64, 00:21:40.639 "state": "online", 00:21:40.639 "raid_level": "raid5f", 00:21:40.639 "superblock": true, 00:21:40.639 "num_base_bdevs": 4, 00:21:40.639 "num_base_bdevs_discovered": 4, 00:21:40.639 "num_base_bdevs_operational": 4, 00:21:40.639 "base_bdevs_list": [ 00:21:40.639 { 00:21:40.639 "name": "pt1", 00:21:40.639 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:40.639 "is_configured": true, 00:21:40.639 
"data_offset": 2048, 00:21:40.639 "data_size": 63488 00:21:40.639 }, 00:21:40.639 { 00:21:40.639 "name": "pt2", 00:21:40.639 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:40.639 "is_configured": true, 00:21:40.639 "data_offset": 2048, 00:21:40.639 "data_size": 63488 00:21:40.640 }, 00:21:40.640 { 00:21:40.640 "name": "pt3", 00:21:40.640 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:40.640 "is_configured": true, 00:21:40.640 "data_offset": 2048, 00:21:40.640 "data_size": 63488 00:21:40.640 }, 00:21:40.640 { 00:21:40.640 "name": "pt4", 00:21:40.640 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:40.640 "is_configured": true, 00:21:40.640 "data_offset": 2048, 00:21:40.640 "data_size": 63488 00:21:40.640 } 00:21:40.640 ] 00:21:40.640 }' 00:21:40.640 12:20:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:40.640 12:20:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.209 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:21:41.209 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:41.209 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:41.209 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:41.209 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:41.209 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:41.209 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:41.209 12:20:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.209 12:20:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.209 12:20:37 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:41.209 [2024-11-25 12:20:37.124714] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:41.209 12:20:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.209 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:41.209 "name": "raid_bdev1", 00:21:41.209 "aliases": [ 00:21:41.209 "7502aa39-b5c3-4f27-9c2f-15fdcd66f1b2" 00:21:41.209 ], 00:21:41.209 "product_name": "Raid Volume", 00:21:41.209 "block_size": 512, 00:21:41.209 "num_blocks": 190464, 00:21:41.209 "uuid": "7502aa39-b5c3-4f27-9c2f-15fdcd66f1b2", 00:21:41.209 "assigned_rate_limits": { 00:21:41.209 "rw_ios_per_sec": 0, 00:21:41.209 "rw_mbytes_per_sec": 0, 00:21:41.209 "r_mbytes_per_sec": 0, 00:21:41.209 "w_mbytes_per_sec": 0 00:21:41.209 }, 00:21:41.209 "claimed": false, 00:21:41.209 "zoned": false, 00:21:41.209 "supported_io_types": { 00:21:41.209 "read": true, 00:21:41.209 "write": true, 00:21:41.209 "unmap": false, 00:21:41.209 "flush": false, 00:21:41.209 "reset": true, 00:21:41.209 "nvme_admin": false, 00:21:41.209 "nvme_io": false, 00:21:41.209 "nvme_io_md": false, 00:21:41.209 "write_zeroes": true, 00:21:41.209 "zcopy": false, 00:21:41.209 "get_zone_info": false, 00:21:41.209 "zone_management": false, 00:21:41.209 "zone_append": false, 00:21:41.209 "compare": false, 00:21:41.209 "compare_and_write": false, 00:21:41.209 "abort": false, 00:21:41.209 "seek_hole": false, 00:21:41.209 "seek_data": false, 00:21:41.209 "copy": false, 00:21:41.209 "nvme_iov_md": false 00:21:41.209 }, 00:21:41.209 "driver_specific": { 00:21:41.209 "raid": { 00:21:41.209 "uuid": "7502aa39-b5c3-4f27-9c2f-15fdcd66f1b2", 00:21:41.209 "strip_size_kb": 64, 00:21:41.209 "state": "online", 00:21:41.209 "raid_level": "raid5f", 00:21:41.209 "superblock": true, 00:21:41.209 "num_base_bdevs": 4, 00:21:41.209 "num_base_bdevs_discovered": 4, 
00:21:41.209 "num_base_bdevs_operational": 4, 00:21:41.209 "base_bdevs_list": [ 00:21:41.209 { 00:21:41.209 "name": "pt1", 00:21:41.209 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:41.209 "is_configured": true, 00:21:41.209 "data_offset": 2048, 00:21:41.209 "data_size": 63488 00:21:41.209 }, 00:21:41.209 { 00:21:41.209 "name": "pt2", 00:21:41.209 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:41.209 "is_configured": true, 00:21:41.209 "data_offset": 2048, 00:21:41.209 "data_size": 63488 00:21:41.209 }, 00:21:41.209 { 00:21:41.209 "name": "pt3", 00:21:41.209 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:41.209 "is_configured": true, 00:21:41.209 "data_offset": 2048, 00:21:41.209 "data_size": 63488 00:21:41.209 }, 00:21:41.209 { 00:21:41.209 "name": "pt4", 00:21:41.209 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:41.209 "is_configured": true, 00:21:41.209 "data_offset": 2048, 00:21:41.209 "data_size": 63488 00:21:41.209 } 00:21:41.209 ] 00:21:41.209 } 00:21:41.209 } 00:21:41.209 }' 00:21:41.209 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:41.209 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:41.209 pt2 00:21:41.209 pt3 00:21:41.209 pt4' 00:21:41.209 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:41.209 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:41.209 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:41.209 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:41.209 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt1 00:21:41.209 12:20:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.209 12:20:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.209 12:20:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.469 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:41.469 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:41.469 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:41.469 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:41.469 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:41.469 12:20:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.469 12:20:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.469 12:20:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.469 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:41.469 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:41.469 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:41.469 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:21:41.469 12:20:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.469 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:41.469 12:20:37 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.469 12:20:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.469 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:41.469 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:41.469 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:41.469 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:21:41.469 12:20:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.469 12:20:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.469 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:41.469 12:20:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.469 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:41.469 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:41.469 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:41.469 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:41.469 12:20:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.469 12:20:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.469 [2024-11-25 12:20:37.504789] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:41.469 12:20:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.469 
12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7502aa39-b5c3-4f27-9c2f-15fdcd66f1b2 '!=' 7502aa39-b5c3-4f27-9c2f-15fdcd66f1b2 ']' 00:21:41.469 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:21:41.469 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:41.469 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:21:41.469 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:21:41.469 12:20:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.469 12:20:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.469 [2024-11-25 12:20:37.552619] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:41.469 12:20:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.469 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:41.469 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:41.728 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:41.728 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:41.728 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:41.728 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:41.728 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:41.728 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:41.728 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:21:41.728 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:41.728 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:41.728 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.728 12:20:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.728 12:20:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.728 12:20:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.728 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:41.728 "name": "raid_bdev1", 00:21:41.728 "uuid": "7502aa39-b5c3-4f27-9c2f-15fdcd66f1b2", 00:21:41.728 "strip_size_kb": 64, 00:21:41.728 "state": "online", 00:21:41.728 "raid_level": "raid5f", 00:21:41.728 "superblock": true, 00:21:41.728 "num_base_bdevs": 4, 00:21:41.728 "num_base_bdevs_discovered": 3, 00:21:41.729 "num_base_bdevs_operational": 3, 00:21:41.729 "base_bdevs_list": [ 00:21:41.729 { 00:21:41.729 "name": null, 00:21:41.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.729 "is_configured": false, 00:21:41.729 "data_offset": 0, 00:21:41.729 "data_size": 63488 00:21:41.729 }, 00:21:41.729 { 00:21:41.729 "name": "pt2", 00:21:41.729 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:41.729 "is_configured": true, 00:21:41.729 "data_offset": 2048, 00:21:41.729 "data_size": 63488 00:21:41.729 }, 00:21:41.729 { 00:21:41.729 "name": "pt3", 00:21:41.729 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:41.729 "is_configured": true, 00:21:41.729 "data_offset": 2048, 00:21:41.729 "data_size": 63488 00:21:41.729 }, 00:21:41.729 { 00:21:41.729 "name": "pt4", 00:21:41.729 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:41.729 "is_configured": true, 00:21:41.729 
"data_offset": 2048, 00:21:41.729 "data_size": 63488 00:21:41.729 } 00:21:41.729 ] 00:21:41.729 }' 00:21:41.729 12:20:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:41.729 12:20:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.296 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:42.296 12:20:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.296 12:20:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.296 [2024-11-25 12:20:38.116837] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:42.296 [2024-11-25 12:20:38.116908] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:42.296 [2024-11-25 12:20:38.117024] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:42.296 [2024-11-25 12:20:38.117128] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:42.296 [2024-11-25 12:20:38.117154] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:42.296 12:20:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.296 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.296 12:20:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.296 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:21:42.296 12:20:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.296 12:20:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.296 12:20:38 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:21:42.296 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:21:42.296 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:21:42.297 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:42.297 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:21:42.297 12:20:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.297 12:20:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.297 12:20:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.297 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:42.297 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:42.297 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:21:42.297 12:20:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.297 12:20:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.297 12:20:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.297 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:42.297 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:42.297 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:21:42.297 12:20:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.297 12:20:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.297 12:20:38 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.297 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:42.297 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:42.297 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:21:42.297 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:42.297 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:42.297 12:20:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.297 12:20:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.297 [2024-11-25 12:20:38.208794] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:42.297 [2024-11-25 12:20:38.208874] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:42.297 [2024-11-25 12:20:38.208903] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:21:42.297 [2024-11-25 12:20:38.208919] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:42.297 [2024-11-25 12:20:38.211852] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:42.297 [2024-11-25 12:20:38.211908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:42.297 [2024-11-25 12:20:38.212042] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:42.297 [2024-11-25 12:20:38.212101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:42.297 pt2 00:21:42.297 12:20:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.297 12:20:38 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:21:42.297 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:42.297 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:42.297 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:42.297 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:42.297 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:42.297 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:42.297 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:42.297 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:42.297 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:42.297 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.297 12:20:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.297 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.297 12:20:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.297 12:20:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.297 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:42.297 "name": "raid_bdev1", 00:21:42.297 "uuid": "7502aa39-b5c3-4f27-9c2f-15fdcd66f1b2", 00:21:42.297 "strip_size_kb": 64, 00:21:42.297 "state": "configuring", 00:21:42.297 "raid_level": "raid5f", 00:21:42.297 "superblock": true, 00:21:42.297 
"num_base_bdevs": 4, 00:21:42.297 "num_base_bdevs_discovered": 1, 00:21:42.297 "num_base_bdevs_operational": 3, 00:21:42.297 "base_bdevs_list": [ 00:21:42.297 { 00:21:42.297 "name": null, 00:21:42.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.297 "is_configured": false, 00:21:42.297 "data_offset": 2048, 00:21:42.297 "data_size": 63488 00:21:42.297 }, 00:21:42.297 { 00:21:42.297 "name": "pt2", 00:21:42.297 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:42.297 "is_configured": true, 00:21:42.297 "data_offset": 2048, 00:21:42.297 "data_size": 63488 00:21:42.297 }, 00:21:42.297 { 00:21:42.297 "name": null, 00:21:42.297 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:42.297 "is_configured": false, 00:21:42.297 "data_offset": 2048, 00:21:42.297 "data_size": 63488 00:21:42.297 }, 00:21:42.297 { 00:21:42.297 "name": null, 00:21:42.297 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:42.297 "is_configured": false, 00:21:42.297 "data_offset": 2048, 00:21:42.297 "data_size": 63488 00:21:42.297 } 00:21:42.297 ] 00:21:42.297 }' 00:21:42.297 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:42.297 12:20:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.865 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:21:42.865 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:42.865 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:42.865 12:20:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.865 12:20:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.865 [2024-11-25 12:20:38.757032] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:42.865 [2024-11-25 
12:20:38.757106] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:42.865 [2024-11-25 12:20:38.757140] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:21:42.865 [2024-11-25 12:20:38.757155] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:42.865 [2024-11-25 12:20:38.757774] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:42.865 [2024-11-25 12:20:38.757832] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:42.865 [2024-11-25 12:20:38.757939] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:42.865 [2024-11-25 12:20:38.757978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:42.865 pt3 00:21:42.865 12:20:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.865 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:21:42.865 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:42.865 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:42.865 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:42.865 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:42.865 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:42.865 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:42.865 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:42.865 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:21:42.865 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:42.865 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.865 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.865 12:20:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.865 12:20:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.865 12:20:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.865 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:42.865 "name": "raid_bdev1", 00:21:42.865 "uuid": "7502aa39-b5c3-4f27-9c2f-15fdcd66f1b2", 00:21:42.865 "strip_size_kb": 64, 00:21:42.865 "state": "configuring", 00:21:42.865 "raid_level": "raid5f", 00:21:42.865 "superblock": true, 00:21:42.865 "num_base_bdevs": 4, 00:21:42.865 "num_base_bdevs_discovered": 2, 00:21:42.865 "num_base_bdevs_operational": 3, 00:21:42.865 "base_bdevs_list": [ 00:21:42.865 { 00:21:42.865 "name": null, 00:21:42.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.865 "is_configured": false, 00:21:42.865 "data_offset": 2048, 00:21:42.865 "data_size": 63488 00:21:42.865 }, 00:21:42.865 { 00:21:42.865 "name": "pt2", 00:21:42.865 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:42.865 "is_configured": true, 00:21:42.865 "data_offset": 2048, 00:21:42.865 "data_size": 63488 00:21:42.865 }, 00:21:42.865 { 00:21:42.865 "name": "pt3", 00:21:42.865 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:42.865 "is_configured": true, 00:21:42.865 "data_offset": 2048, 00:21:42.865 "data_size": 63488 00:21:42.865 }, 00:21:42.865 { 00:21:42.865 "name": null, 00:21:42.865 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:42.865 "is_configured": false, 00:21:42.865 "data_offset": 2048, 
00:21:42.865 "data_size": 63488 00:21:42.865 } 00:21:42.865 ] 00:21:42.865 }' 00:21:42.865 12:20:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:42.865 12:20:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.435 12:20:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:21:43.436 12:20:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:43.436 12:20:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:21:43.436 12:20:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:43.436 12:20:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.436 12:20:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.436 [2024-11-25 12:20:39.281180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:43.436 [2024-11-25 12:20:39.281269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:43.436 [2024-11-25 12:20:39.281304] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:21:43.436 [2024-11-25 12:20:39.281320] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:43.436 [2024-11-25 12:20:39.281901] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:43.436 [2024-11-25 12:20:39.281937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:43.436 [2024-11-25 12:20:39.282045] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:21:43.436 [2024-11-25 12:20:39.282077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:43.436 [2024-11-25 12:20:39.282244] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:43.436 [2024-11-25 12:20:39.282269] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:43.436 [2024-11-25 12:20:39.282597] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:21:43.437 [2024-11-25 12:20:39.289110] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:43.437 [2024-11-25 12:20:39.289160] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:21:43.437 [2024-11-25 12:20:39.289501] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:43.437 pt4 00:21:43.437 12:20:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.437 12:20:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:43.437 12:20:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:43.437 12:20:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:43.437 12:20:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:43.437 12:20:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:43.437 12:20:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:43.437 12:20:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:43.437 12:20:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:43.437 12:20:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:43.437 12:20:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:43.437 
12:20:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.437 12:20:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.437 12:20:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.437 12:20:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.438 12:20:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.438 12:20:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:43.438 "name": "raid_bdev1", 00:21:43.438 "uuid": "7502aa39-b5c3-4f27-9c2f-15fdcd66f1b2", 00:21:43.438 "strip_size_kb": 64, 00:21:43.438 "state": "online", 00:21:43.438 "raid_level": "raid5f", 00:21:43.438 "superblock": true, 00:21:43.438 "num_base_bdevs": 4, 00:21:43.438 "num_base_bdevs_discovered": 3, 00:21:43.438 "num_base_bdevs_operational": 3, 00:21:43.438 "base_bdevs_list": [ 00:21:43.438 { 00:21:43.438 "name": null, 00:21:43.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.438 "is_configured": false, 00:21:43.438 "data_offset": 2048, 00:21:43.438 "data_size": 63488 00:21:43.438 }, 00:21:43.438 { 00:21:43.438 "name": "pt2", 00:21:43.438 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:43.438 "is_configured": true, 00:21:43.438 "data_offset": 2048, 00:21:43.438 "data_size": 63488 00:21:43.438 }, 00:21:43.438 { 00:21:43.438 "name": "pt3", 00:21:43.438 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:43.438 "is_configured": true, 00:21:43.438 "data_offset": 2048, 00:21:43.438 "data_size": 63488 00:21:43.438 }, 00:21:43.438 { 00:21:43.438 "name": "pt4", 00:21:43.438 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:43.438 "is_configured": true, 00:21:43.438 "data_offset": 2048, 00:21:43.438 "data_size": 63488 00:21:43.438 } 00:21:43.438 ] 00:21:43.438 }' 00:21:43.438 12:20:39 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:43.438 12:20:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.707 12:20:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:43.707 12:20:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.707 12:20:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.967 [2024-11-25 12:20:39.797009] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:43.967 [2024-11-25 12:20:39.797045] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:43.967 [2024-11-25 12:20:39.797143] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:43.967 [2024-11-25 12:20:39.797237] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:43.967 [2024-11-25 12:20:39.797258] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:43.967 12:20:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.967 12:20:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.967 12:20:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.967 12:20:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.967 12:20:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:21:43.967 12:20:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.967 12:20:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:21:43.967 12:20:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:21:43.967 12:20:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:21:43.967 12:20:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:21:43.967 12:20:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:21:43.968 12:20:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.968 12:20:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.968 12:20:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.968 12:20:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:43.968 12:20:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.968 12:20:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.968 [2024-11-25 12:20:39.865010] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:43.968 [2024-11-25 12:20:39.865096] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:43.968 [2024-11-25 12:20:39.865131] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:21:43.968 [2024-11-25 12:20:39.865156] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:43.968 [2024-11-25 12:20:39.868067] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:43.968 [2024-11-25 12:20:39.868114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:43.968 [2024-11-25 12:20:39.868210] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:43.968 [2024-11-25 12:20:39.868278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:43.968 
[2024-11-25 12:20:39.868467] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:43.968 [2024-11-25 12:20:39.868502] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:43.968 [2024-11-25 12:20:39.868523] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:21:43.968 [2024-11-25 12:20:39.868594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:43.968 [2024-11-25 12:20:39.868726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:43.968 pt1 00:21:43.968 12:20:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.968 12:20:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:21:43.968 12:20:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:21:43.968 12:20:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:43.968 12:20:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:43.968 12:20:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:43.968 12:20:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:43.968 12:20:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:43.968 12:20:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:43.968 12:20:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:43.968 12:20:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:43.968 12:20:39 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:21:43.968 12:20:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.968 12:20:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.968 12:20:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.968 12:20:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.968 12:20:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.968 12:20:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:43.968 "name": "raid_bdev1", 00:21:43.968 "uuid": "7502aa39-b5c3-4f27-9c2f-15fdcd66f1b2", 00:21:43.968 "strip_size_kb": 64, 00:21:43.968 "state": "configuring", 00:21:43.968 "raid_level": "raid5f", 00:21:43.968 "superblock": true, 00:21:43.968 "num_base_bdevs": 4, 00:21:43.968 "num_base_bdevs_discovered": 2, 00:21:43.968 "num_base_bdevs_operational": 3, 00:21:43.968 "base_bdevs_list": [ 00:21:43.968 { 00:21:43.968 "name": null, 00:21:43.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.968 "is_configured": false, 00:21:43.968 "data_offset": 2048, 00:21:43.968 "data_size": 63488 00:21:43.968 }, 00:21:43.968 { 00:21:43.968 "name": "pt2", 00:21:43.968 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:43.968 "is_configured": true, 00:21:43.968 "data_offset": 2048, 00:21:43.968 "data_size": 63488 00:21:43.968 }, 00:21:43.968 { 00:21:43.968 "name": "pt3", 00:21:43.968 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:43.968 "is_configured": true, 00:21:43.968 "data_offset": 2048, 00:21:43.968 "data_size": 63488 00:21:43.968 }, 00:21:43.968 { 00:21:43.968 "name": null, 00:21:43.968 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:43.968 "is_configured": false, 00:21:43.968 "data_offset": 2048, 00:21:43.968 "data_size": 63488 00:21:43.968 } 00:21:43.968 ] 
00:21:43.968 }' 00:21:43.968 12:20:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:43.968 12:20:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.602 12:20:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:44.602 12:20:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:21:44.602 12:20:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.602 12:20:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.602 12:20:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.602 12:20:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:21:44.602 12:20:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:44.602 12:20:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.602 12:20:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.602 [2024-11-25 12:20:40.425200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:44.602 [2024-11-25 12:20:40.425321] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:44.602 [2024-11-25 12:20:40.425367] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:21:44.602 [2024-11-25 12:20:40.425398] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:44.602 [2024-11-25 12:20:40.425962] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:44.602 [2024-11-25 12:20:40.426012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:21:44.602 [2024-11-25 12:20:40.426116] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:21:44.602 [2024-11-25 12:20:40.426155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:44.602 [2024-11-25 12:20:40.426354] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:21:44.602 [2024-11-25 12:20:40.426376] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:44.602 [2024-11-25 12:20:40.426687] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:21:44.602 [2024-11-25 12:20:40.433167] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:21:44.602 [2024-11-25 12:20:40.433202] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:21:44.602 [2024-11-25 12:20:40.433535] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:44.602 pt4 00:21:44.602 12:20:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.602 12:20:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:44.602 12:20:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:44.602 12:20:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:44.602 12:20:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:44.602 12:20:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:44.602 12:20:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:44.602 12:20:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:44.602 12:20:40 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:44.602 12:20:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:44.602 12:20:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:44.602 12:20:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:44.602 12:20:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.602 12:20:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.602 12:20:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.602 12:20:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.602 12:20:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:44.602 "name": "raid_bdev1", 00:21:44.602 "uuid": "7502aa39-b5c3-4f27-9c2f-15fdcd66f1b2", 00:21:44.602 "strip_size_kb": 64, 00:21:44.602 "state": "online", 00:21:44.602 "raid_level": "raid5f", 00:21:44.602 "superblock": true, 00:21:44.602 "num_base_bdevs": 4, 00:21:44.602 "num_base_bdevs_discovered": 3, 00:21:44.602 "num_base_bdevs_operational": 3, 00:21:44.602 "base_bdevs_list": [ 00:21:44.602 { 00:21:44.602 "name": null, 00:21:44.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.602 "is_configured": false, 00:21:44.602 "data_offset": 2048, 00:21:44.603 "data_size": 63488 00:21:44.603 }, 00:21:44.603 { 00:21:44.603 "name": "pt2", 00:21:44.603 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:44.603 "is_configured": true, 00:21:44.603 "data_offset": 2048, 00:21:44.603 "data_size": 63488 00:21:44.603 }, 00:21:44.603 { 00:21:44.603 "name": "pt3", 00:21:44.603 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:44.603 "is_configured": true, 00:21:44.603 "data_offset": 2048, 00:21:44.603 "data_size": 63488 
00:21:44.603 }, 00:21:44.603 { 00:21:44.603 "name": "pt4", 00:21:44.603 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:44.603 "is_configured": true, 00:21:44.603 "data_offset": 2048, 00:21:44.603 "data_size": 63488 00:21:44.603 } 00:21:44.603 ] 00:21:44.603 }' 00:21:44.603 12:20:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:44.603 12:20:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.862 12:20:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:21:44.862 12:20:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:44.862 12:20:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.862 12:20:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.121 12:20:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.121 12:20:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:21:45.121 12:20:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:45.121 12:20:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.121 12:20:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.121 12:20:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:21:45.121 [2024-11-25 12:20:40.997166] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:45.121 12:20:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.121 12:20:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 7502aa39-b5c3-4f27-9c2f-15fdcd66f1b2 '!=' 7502aa39-b5c3-4f27-9c2f-15fdcd66f1b2 ']' 00:21:45.121 12:20:41 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84517 00:21:45.121 12:20:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84517 ']' 00:21:45.121 12:20:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84517 00:21:45.121 12:20:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:21:45.121 12:20:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:45.121 12:20:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84517 00:21:45.121 killing process with pid 84517 00:21:45.121 12:20:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:45.121 12:20:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:45.121 12:20:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84517' 00:21:45.121 12:20:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84517 00:21:45.121 [2024-11-25 12:20:41.067049] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:45.121 12:20:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84517 00:21:45.121 [2024-11-25 12:20:41.067166] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:45.121 [2024-11-25 12:20:41.067268] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:45.121 [2024-11-25 12:20:41.067289] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:21:45.380 [2024-11-25 12:20:41.420745] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:46.756 12:20:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:21:46.756 
00:21:46.756 real 0m9.497s 00:21:46.756 user 0m15.650s 00:21:46.756 sys 0m1.369s 00:21:46.756 12:20:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:46.756 ************************************ 00:21:46.756 END TEST raid5f_superblock_test 00:21:46.756 ************************************ 00:21:46.756 12:20:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.756 12:20:42 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:21:46.756 12:20:42 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:21:46.756 12:20:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:21:46.756 12:20:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:46.756 12:20:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:46.756 ************************************ 00:21:46.756 START TEST raid5f_rebuild_test 00:21:46.756 ************************************ 00:21:46.756 12:20:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:21:46.756 12:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:21:46.756 12:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:21:46.756 12:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:21:46.756 12:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:21:46.756 12:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:21:46.756 12:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:46.756 12:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:46.756 12:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:21:46.756 12:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:46.756 12:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:46.756 12:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:46.756 12:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:46.756 12:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:46.756 12:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:21:46.756 12:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:46.756 12:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:46.756 12:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:21:46.756 12:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:46.756 12:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:46.756 12:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:46.756 12:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:46.756 12:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:46.756 12:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:46.756 12:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:46.756 12:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:46.756 12:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:46.756 12:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:21:46.756 12:20:42 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:21:46.756 12:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:21:46.756 12:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:21:46.756 12:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:21:46.756 12:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85008 00:21:46.756 12:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85008 00:21:46.756 12:20:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 85008 ']' 00:21:46.756 12:20:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:46.756 12:20:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:46.756 12:20:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:46.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:46.756 12:20:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:46.756 12:20:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:46.756 12:20:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.756 [2024-11-25 12:20:42.607283] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:21:46.756 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:46.756 Zero copy mechanism will not be used. 
00:21:46.756 [2024-11-25 12:20:42.607483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85008 ] 00:21:46.756 [2024-11-25 12:20:42.796163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.015 [2024-11-25 12:20:42.952642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.273 [2024-11-25 12:20:43.171784] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:47.273 [2024-11-25 12:20:43.171827] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:47.531 12:20:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:47.531 12:20:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:21:47.531 12:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:47.531 12:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:47.531 12:20:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.531 12:20:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.791 BaseBdev1_malloc 00:21:47.791 12:20:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.791 12:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:47.791 12:20:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.791 12:20:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.791 [2024-11-25 12:20:43.627289] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:21:47.791 [2024-11-25 12:20:43.627386] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:47.791 [2024-11-25 12:20:43.627429] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:47.791 [2024-11-25 12:20:43.627448] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:47.791 [2024-11-25 12:20:43.630247] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:47.791 [2024-11-25 12:20:43.630300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:47.791 BaseBdev1 00:21:47.791 12:20:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.791 12:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:47.791 12:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:47.791 12:20:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.791 12:20:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.791 BaseBdev2_malloc 00:21:47.791 12:20:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.791 12:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:47.791 12:20:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.791 12:20:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.791 [2024-11-25 12:20:43.675579] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:47.791 [2024-11-25 12:20:43.675651] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:47.791 [2024-11-25 12:20:43.675680] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:47.791 [2024-11-25 12:20:43.675702] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:47.791 [2024-11-25 12:20:43.678452] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:47.791 [2024-11-25 12:20:43.678502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:47.791 BaseBdev2 00:21:47.791 12:20:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.791 12:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:47.791 12:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:47.791 12:20:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.791 12:20:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.791 BaseBdev3_malloc 00:21:47.791 12:20:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.791 12:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:47.791 12:20:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.791 12:20:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.791 [2024-11-25 12:20:43.741112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:47.791 [2024-11-25 12:20:43.741209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:47.791 [2024-11-25 12:20:43.741267] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:47.791 [2024-11-25 12:20:43.741300] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:47.791 
[2024-11-25 12:20:43.745181] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:47.791 [2024-11-25 12:20:43.745256] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:47.791 BaseBdev3 00:21:47.791 12:20:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.791 12:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:47.791 12:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:47.791 12:20:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.791 12:20:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.791 BaseBdev4_malloc 00:21:47.791 12:20:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.791 12:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:47.791 12:20:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.792 12:20:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.792 [2024-11-25 12:20:43.794025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:47.792 [2024-11-25 12:20:43.794094] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:47.792 [2024-11-25 12:20:43.794124] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:47.792 [2024-11-25 12:20:43.794142] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:47.792 [2024-11-25 12:20:43.796880] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:47.792 [2024-11-25 12:20:43.796948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:21:47.792 BaseBdev4 00:21:47.792 12:20:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.792 12:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:21:47.792 12:20:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.792 12:20:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.792 spare_malloc 00:21:47.792 12:20:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.792 12:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:47.792 12:20:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.792 12:20:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.792 spare_delay 00:21:47.792 12:20:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.792 12:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:47.792 12:20:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.792 12:20:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.792 [2024-11-25 12:20:43.854011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:47.792 [2024-11-25 12:20:43.854100] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:47.792 [2024-11-25 12:20:43.854130] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:47.792 [2024-11-25 12:20:43.854149] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:47.792 [2024-11-25 12:20:43.856934] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:47.792 [2024-11-25 12:20:43.856986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:47.792 spare 00:21:47.792 12:20:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.792 12:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:21:47.792 12:20:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.792 12:20:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.792 [2024-11-25 12:20:43.862071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:47.792 [2024-11-25 12:20:43.864519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:47.792 [2024-11-25 12:20:43.864611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:47.792 [2024-11-25 12:20:43.864692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:47.792 [2024-11-25 12:20:43.864821] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:47.792 [2024-11-25 12:20:43.864857] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:21:47.792 [2024-11-25 12:20:43.865163] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:47.792 [2024-11-25 12:20:43.871925] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:47.792 [2024-11-25 12:20:43.871958] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:47.792 [2024-11-25 12:20:43.872228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:47.792 12:20:43 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.792 12:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:21:47.792 12:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:47.792 12:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:47.792 12:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:47.792 12:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:47.792 12:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:47.792 12:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:47.792 12:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:47.792 12:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:47.792 12:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:47.792 12:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:47.792 12:20:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.792 12:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:47.792 12:20:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.051 12:20:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.051 12:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:48.051 "name": "raid_bdev1", 00:21:48.051 "uuid": "e2680e6c-5d31-4b5f-913a-58601bd5ca2f", 00:21:48.051 "strip_size_kb": 64, 00:21:48.051 "state": "online", 00:21:48.051 
"raid_level": "raid5f", 00:21:48.051 "superblock": false, 00:21:48.051 "num_base_bdevs": 4, 00:21:48.051 "num_base_bdevs_discovered": 4, 00:21:48.051 "num_base_bdevs_operational": 4, 00:21:48.051 "base_bdevs_list": [ 00:21:48.051 { 00:21:48.051 "name": "BaseBdev1", 00:21:48.051 "uuid": "0e1838f8-7c10-534b-97a4-895cf9934081", 00:21:48.051 "is_configured": true, 00:21:48.051 "data_offset": 0, 00:21:48.051 "data_size": 65536 00:21:48.051 }, 00:21:48.051 { 00:21:48.051 "name": "BaseBdev2", 00:21:48.051 "uuid": "6d144668-16c7-52eb-bab1-385e8e0f3e00", 00:21:48.051 "is_configured": true, 00:21:48.051 "data_offset": 0, 00:21:48.051 "data_size": 65536 00:21:48.051 }, 00:21:48.051 { 00:21:48.051 "name": "BaseBdev3", 00:21:48.051 "uuid": "95bdea9a-4ae3-5f1a-b9e6-9c2990d15699", 00:21:48.051 "is_configured": true, 00:21:48.051 "data_offset": 0, 00:21:48.051 "data_size": 65536 00:21:48.051 }, 00:21:48.051 { 00:21:48.051 "name": "BaseBdev4", 00:21:48.051 "uuid": "7d869b8d-b8b8-544e-a67e-263a810ef734", 00:21:48.051 "is_configured": true, 00:21:48.051 "data_offset": 0, 00:21:48.051 "data_size": 65536 00:21:48.051 } 00:21:48.051 ] 00:21:48.051 }' 00:21:48.051 12:20:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:48.051 12:20:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.315 12:20:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:48.315 12:20:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:48.315 12:20:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.315 12:20:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.315 [2024-11-25 12:20:44.395959] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:48.575 12:20:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:48.575 12:20:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:21:48.575 12:20:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:48.575 12:20:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.575 12:20:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:48.575 12:20:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.575 12:20:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.575 12:20:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:21:48.575 12:20:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:21:48.575 12:20:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:21:48.575 12:20:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:21:48.575 12:20:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:21:48.575 12:20:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:48.575 12:20:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:48.575 12:20:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:48.575 12:20:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:48.575 12:20:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:48.575 12:20:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:21:48.575 12:20:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:48.575 12:20:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:21:48.575 12:20:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:48.834 [2024-11-25 12:20:44.751859] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:21:48.834 /dev/nbd0 00:21:48.834 12:20:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:48.834 12:20:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:48.834 12:20:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:48.834 12:20:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:21:48.834 12:20:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:48.834 12:20:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:48.834 12:20:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:48.834 12:20:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:21:48.834 12:20:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:48.834 12:20:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:48.834 12:20:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:48.834 1+0 records in 00:21:48.834 1+0 records out 00:21:48.834 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372255 s, 11.0 MB/s 00:21:48.834 12:20:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:48.834 12:20:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:21:48.835 12:20:44 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:48.835 12:20:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:48.835 12:20:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:21:48.835 12:20:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:48.835 12:20:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:48.835 12:20:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:21:48.835 12:20:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:21:48.835 12:20:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:21:48.835 12:20:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:21:49.403 512+0 records in 00:21:49.403 512+0 records out 00:21:49.403 100663296 bytes (101 MB, 96 MiB) copied, 0.623977 s, 161 MB/s 00:21:49.403 12:20:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:21:49.403 12:20:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:49.403 12:20:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:49.403 12:20:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:49.403 12:20:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:21:49.403 12:20:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:49.403 12:20:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:49.972 [2024-11-25 12:20:45.752010] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:21:49.972 12:20:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:49.972 12:20:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:49.972 12:20:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:49.972 12:20:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:49.972 12:20:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:49.972 12:20:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:49.972 12:20:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:21:49.972 12:20:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:49.972 12:20:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:49.972 12:20:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.972 12:20:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.972 [2024-11-25 12:20:45.782435] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:49.972 12:20:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.972 12:20:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:49.972 12:20:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:49.972 12:20:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:49.972 12:20:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:49.972 12:20:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:49.972 12:20:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:21:49.972 12:20:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:49.972 12:20:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:49.972 12:20:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:49.972 12:20:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:49.972 12:20:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:49.972 12:20:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:49.972 12:20:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.972 12:20:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.972 12:20:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.972 12:20:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:49.972 "name": "raid_bdev1", 00:21:49.972 "uuid": "e2680e6c-5d31-4b5f-913a-58601bd5ca2f", 00:21:49.972 "strip_size_kb": 64, 00:21:49.972 "state": "online", 00:21:49.972 "raid_level": "raid5f", 00:21:49.972 "superblock": false, 00:21:49.972 "num_base_bdevs": 4, 00:21:49.972 "num_base_bdevs_discovered": 3, 00:21:49.972 "num_base_bdevs_operational": 3, 00:21:49.972 "base_bdevs_list": [ 00:21:49.972 { 00:21:49.972 "name": null, 00:21:49.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:49.972 "is_configured": false, 00:21:49.972 "data_offset": 0, 00:21:49.972 "data_size": 65536 00:21:49.972 }, 00:21:49.972 { 00:21:49.972 "name": "BaseBdev2", 00:21:49.972 "uuid": "6d144668-16c7-52eb-bab1-385e8e0f3e00", 00:21:49.972 "is_configured": true, 00:21:49.972 "data_offset": 0, 00:21:49.972 "data_size": 65536 00:21:49.972 }, 00:21:49.972 { 00:21:49.972 "name": "BaseBdev3", 00:21:49.972 "uuid": 
"95bdea9a-4ae3-5f1a-b9e6-9c2990d15699", 00:21:49.972 "is_configured": true, 00:21:49.972 "data_offset": 0, 00:21:49.972 "data_size": 65536 00:21:49.972 }, 00:21:49.972 { 00:21:49.972 "name": "BaseBdev4", 00:21:49.972 "uuid": "7d869b8d-b8b8-544e-a67e-263a810ef734", 00:21:49.972 "is_configured": true, 00:21:49.972 "data_offset": 0, 00:21:49.972 "data_size": 65536 00:21:49.972 } 00:21:49.972 ] 00:21:49.972 }' 00:21:49.972 12:20:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:49.972 12:20:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.231 12:20:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:50.231 12:20:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.231 12:20:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.231 [2024-11-25 12:20:46.298638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:50.231 [2024-11-25 12:20:46.313021] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:21:50.231 12:20:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.231 12:20:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:50.491 [2024-11-25 12:20:46.322279] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:51.430 12:20:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:51.430 12:20:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:51.430 12:20:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:51.430 12:20:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:51.430 12:20:47 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:51.430 12:20:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.430 12:20:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:51.430 12:20:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.430 12:20:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.430 12:20:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.430 12:20:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:51.430 "name": "raid_bdev1", 00:21:51.430 "uuid": "e2680e6c-5d31-4b5f-913a-58601bd5ca2f", 00:21:51.430 "strip_size_kb": 64, 00:21:51.430 "state": "online", 00:21:51.430 "raid_level": "raid5f", 00:21:51.430 "superblock": false, 00:21:51.430 "num_base_bdevs": 4, 00:21:51.430 "num_base_bdevs_discovered": 4, 00:21:51.430 "num_base_bdevs_operational": 4, 00:21:51.430 "process": { 00:21:51.430 "type": "rebuild", 00:21:51.430 "target": "spare", 00:21:51.430 "progress": { 00:21:51.430 "blocks": 17280, 00:21:51.430 "percent": 8 00:21:51.430 } 00:21:51.430 }, 00:21:51.430 "base_bdevs_list": [ 00:21:51.430 { 00:21:51.430 "name": "spare", 00:21:51.430 "uuid": "edd03535-3c0d-5b6f-a49a-6ae031d3b0eb", 00:21:51.430 "is_configured": true, 00:21:51.430 "data_offset": 0, 00:21:51.430 "data_size": 65536 00:21:51.430 }, 00:21:51.430 { 00:21:51.430 "name": "BaseBdev2", 00:21:51.430 "uuid": "6d144668-16c7-52eb-bab1-385e8e0f3e00", 00:21:51.430 "is_configured": true, 00:21:51.430 "data_offset": 0, 00:21:51.430 "data_size": 65536 00:21:51.430 }, 00:21:51.430 { 00:21:51.430 "name": "BaseBdev3", 00:21:51.430 "uuid": "95bdea9a-4ae3-5f1a-b9e6-9c2990d15699", 00:21:51.430 "is_configured": true, 00:21:51.430 "data_offset": 0, 00:21:51.430 "data_size": 65536 00:21:51.430 }, 
00:21:51.430 { 00:21:51.430 "name": "BaseBdev4", 00:21:51.430 "uuid": "7d869b8d-b8b8-544e-a67e-263a810ef734", 00:21:51.430 "is_configured": true, 00:21:51.430 "data_offset": 0, 00:21:51.430 "data_size": 65536 00:21:51.430 } 00:21:51.430 ] 00:21:51.430 }' 00:21:51.430 12:20:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:51.430 12:20:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:51.430 12:20:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:51.430 12:20:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:51.430 12:20:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:51.430 12:20:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.430 12:20:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.430 [2024-11-25 12:20:47.479775] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:51.690 [2024-11-25 12:20:47.533739] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:51.690 [2024-11-25 12:20:47.533870] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:51.690 [2024-11-25 12:20:47.533897] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:51.690 [2024-11-25 12:20:47.533913] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:51.690 12:20:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.690 12:20:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:51.690 12:20:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:21:51.690 12:20:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:51.690 12:20:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:51.690 12:20:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:51.690 12:20:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:51.690 12:20:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:51.690 12:20:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:51.690 12:20:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:51.690 12:20:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:51.690 12:20:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.690 12:20:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.690 12:20:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:51.690 12:20:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.690 12:20:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.690 12:20:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:51.690 "name": "raid_bdev1", 00:21:51.690 "uuid": "e2680e6c-5d31-4b5f-913a-58601bd5ca2f", 00:21:51.690 "strip_size_kb": 64, 00:21:51.690 "state": "online", 00:21:51.690 "raid_level": "raid5f", 00:21:51.690 "superblock": false, 00:21:51.690 "num_base_bdevs": 4, 00:21:51.690 "num_base_bdevs_discovered": 3, 00:21:51.690 "num_base_bdevs_operational": 3, 00:21:51.690 "base_bdevs_list": [ 00:21:51.690 { 00:21:51.690 "name": null, 00:21:51.690 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:21:51.690 "is_configured": false, 00:21:51.690 "data_offset": 0, 00:21:51.690 "data_size": 65536 00:21:51.690 }, 00:21:51.690 { 00:21:51.690 "name": "BaseBdev2", 00:21:51.690 "uuid": "6d144668-16c7-52eb-bab1-385e8e0f3e00", 00:21:51.690 "is_configured": true, 00:21:51.690 "data_offset": 0, 00:21:51.690 "data_size": 65536 00:21:51.690 }, 00:21:51.690 { 00:21:51.690 "name": "BaseBdev3", 00:21:51.690 "uuid": "95bdea9a-4ae3-5f1a-b9e6-9c2990d15699", 00:21:51.690 "is_configured": true, 00:21:51.690 "data_offset": 0, 00:21:51.690 "data_size": 65536 00:21:51.690 }, 00:21:51.690 { 00:21:51.690 "name": "BaseBdev4", 00:21:51.690 "uuid": "7d869b8d-b8b8-544e-a67e-263a810ef734", 00:21:51.690 "is_configured": true, 00:21:51.690 "data_offset": 0, 00:21:51.690 "data_size": 65536 00:21:51.690 } 00:21:51.690 ] 00:21:51.690 }' 00:21:51.690 12:20:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:51.690 12:20:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.257 12:20:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:52.257 12:20:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:52.257 12:20:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:52.257 12:20:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:52.257 12:20:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:52.257 12:20:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:52.257 12:20:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.257 12:20:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.257 12:20:48 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.257 12:20:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.257 12:20:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:52.257 "name": "raid_bdev1", 00:21:52.257 "uuid": "e2680e6c-5d31-4b5f-913a-58601bd5ca2f", 00:21:52.257 "strip_size_kb": 64, 00:21:52.257 "state": "online", 00:21:52.257 "raid_level": "raid5f", 00:21:52.257 "superblock": false, 00:21:52.257 "num_base_bdevs": 4, 00:21:52.257 "num_base_bdevs_discovered": 3, 00:21:52.257 "num_base_bdevs_operational": 3, 00:21:52.257 "base_bdevs_list": [ 00:21:52.257 { 00:21:52.257 "name": null, 00:21:52.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:52.257 "is_configured": false, 00:21:52.257 "data_offset": 0, 00:21:52.257 "data_size": 65536 00:21:52.257 }, 00:21:52.257 { 00:21:52.257 "name": "BaseBdev2", 00:21:52.257 "uuid": "6d144668-16c7-52eb-bab1-385e8e0f3e00", 00:21:52.257 "is_configured": true, 00:21:52.257 "data_offset": 0, 00:21:52.257 "data_size": 65536 00:21:52.257 }, 00:21:52.257 { 00:21:52.257 "name": "BaseBdev3", 00:21:52.257 "uuid": "95bdea9a-4ae3-5f1a-b9e6-9c2990d15699", 00:21:52.257 "is_configured": true, 00:21:52.257 "data_offset": 0, 00:21:52.257 "data_size": 65536 00:21:52.257 }, 00:21:52.257 { 00:21:52.257 "name": "BaseBdev4", 00:21:52.257 "uuid": "7d869b8d-b8b8-544e-a67e-263a810ef734", 00:21:52.257 "is_configured": true, 00:21:52.257 "data_offset": 0, 00:21:52.257 "data_size": 65536 00:21:52.257 } 00:21:52.257 ] 00:21:52.257 }' 00:21:52.258 12:20:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:52.258 12:20:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:52.258 12:20:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:52.258 12:20:48 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:52.258 12:20:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:52.258 12:20:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.258 12:20:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.258 [2024-11-25 12:20:48.277253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:52.258 [2024-11-25 12:20:48.290929] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:21:52.258 12:20:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.258 12:20:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:52.258 [2024-11-25 12:20:48.299749] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:53.266 12:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:53.266 12:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:53.266 12:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:53.266 12:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:53.266 12:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:53.266 12:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:53.266 12:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.266 12:20:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.266 12:20:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.266 12:20:49 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.266 12:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:53.266 "name": "raid_bdev1", 00:21:53.266 "uuid": "e2680e6c-5d31-4b5f-913a-58601bd5ca2f", 00:21:53.266 "strip_size_kb": 64, 00:21:53.266 "state": "online", 00:21:53.266 "raid_level": "raid5f", 00:21:53.266 "superblock": false, 00:21:53.266 "num_base_bdevs": 4, 00:21:53.266 "num_base_bdevs_discovered": 4, 00:21:53.266 "num_base_bdevs_operational": 4, 00:21:53.266 "process": { 00:21:53.266 "type": "rebuild", 00:21:53.266 "target": "spare", 00:21:53.266 "progress": { 00:21:53.266 "blocks": 17280, 00:21:53.266 "percent": 8 00:21:53.266 } 00:21:53.266 }, 00:21:53.266 "base_bdevs_list": [ 00:21:53.266 { 00:21:53.266 "name": "spare", 00:21:53.266 "uuid": "edd03535-3c0d-5b6f-a49a-6ae031d3b0eb", 00:21:53.266 "is_configured": true, 00:21:53.266 "data_offset": 0, 00:21:53.266 "data_size": 65536 00:21:53.266 }, 00:21:53.266 { 00:21:53.266 "name": "BaseBdev2", 00:21:53.266 "uuid": "6d144668-16c7-52eb-bab1-385e8e0f3e00", 00:21:53.266 "is_configured": true, 00:21:53.266 "data_offset": 0, 00:21:53.266 "data_size": 65536 00:21:53.266 }, 00:21:53.266 { 00:21:53.266 "name": "BaseBdev3", 00:21:53.266 "uuid": "95bdea9a-4ae3-5f1a-b9e6-9c2990d15699", 00:21:53.266 "is_configured": true, 00:21:53.266 "data_offset": 0, 00:21:53.266 "data_size": 65536 00:21:53.266 }, 00:21:53.266 { 00:21:53.266 "name": "BaseBdev4", 00:21:53.266 "uuid": "7d869b8d-b8b8-544e-a67e-263a810ef734", 00:21:53.266 "is_configured": true, 00:21:53.266 "data_offset": 0, 00:21:53.266 "data_size": 65536 00:21:53.266 } 00:21:53.266 ] 00:21:53.266 }' 00:21:53.266 12:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:53.551 12:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:53.551 12:20:49 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:53.551 12:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:53.551 12:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:21:53.551 12:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:21:53.551 12:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:21:53.551 12:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=668 00:21:53.551 12:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:53.551 12:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:53.551 12:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:53.551 12:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:53.551 12:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:53.551 12:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:53.551 12:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:53.551 12:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.551 12:20:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.551 12:20:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.551 12:20:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.551 12:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:53.551 "name": "raid_bdev1", 00:21:53.551 "uuid": "e2680e6c-5d31-4b5f-913a-58601bd5ca2f", 
00:21:53.551 "strip_size_kb": 64, 00:21:53.551 "state": "online", 00:21:53.551 "raid_level": "raid5f", 00:21:53.551 "superblock": false, 00:21:53.551 "num_base_bdevs": 4, 00:21:53.551 "num_base_bdevs_discovered": 4, 00:21:53.551 "num_base_bdevs_operational": 4, 00:21:53.551 "process": { 00:21:53.551 "type": "rebuild", 00:21:53.551 "target": "spare", 00:21:53.551 "progress": { 00:21:53.551 "blocks": 21120, 00:21:53.551 "percent": 10 00:21:53.551 } 00:21:53.551 }, 00:21:53.551 "base_bdevs_list": [ 00:21:53.551 { 00:21:53.551 "name": "spare", 00:21:53.551 "uuid": "edd03535-3c0d-5b6f-a49a-6ae031d3b0eb", 00:21:53.551 "is_configured": true, 00:21:53.551 "data_offset": 0, 00:21:53.551 "data_size": 65536 00:21:53.551 }, 00:21:53.551 { 00:21:53.551 "name": "BaseBdev2", 00:21:53.551 "uuid": "6d144668-16c7-52eb-bab1-385e8e0f3e00", 00:21:53.551 "is_configured": true, 00:21:53.551 "data_offset": 0, 00:21:53.551 "data_size": 65536 00:21:53.551 }, 00:21:53.551 { 00:21:53.551 "name": "BaseBdev3", 00:21:53.551 "uuid": "95bdea9a-4ae3-5f1a-b9e6-9c2990d15699", 00:21:53.551 "is_configured": true, 00:21:53.551 "data_offset": 0, 00:21:53.551 "data_size": 65536 00:21:53.551 }, 00:21:53.551 { 00:21:53.551 "name": "BaseBdev4", 00:21:53.551 "uuid": "7d869b8d-b8b8-544e-a67e-263a810ef734", 00:21:53.551 "is_configured": true, 00:21:53.551 "data_offset": 0, 00:21:53.551 "data_size": 65536 00:21:53.551 } 00:21:53.551 ] 00:21:53.551 }' 00:21:53.551 12:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:53.551 12:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:53.552 12:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:53.552 12:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:53.552 12:20:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:54.923 12:20:50 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:54.923 12:20:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:54.923 12:20:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:54.923 12:20:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:54.923 12:20:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:54.923 12:20:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:54.923 12:20:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:54.923 12:20:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.923 12:20:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.923 12:20:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.923 12:20:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.923 12:20:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:54.923 "name": "raid_bdev1", 00:21:54.923 "uuid": "e2680e6c-5d31-4b5f-913a-58601bd5ca2f", 00:21:54.923 "strip_size_kb": 64, 00:21:54.923 "state": "online", 00:21:54.923 "raid_level": "raid5f", 00:21:54.923 "superblock": false, 00:21:54.923 "num_base_bdevs": 4, 00:21:54.923 "num_base_bdevs_discovered": 4, 00:21:54.923 "num_base_bdevs_operational": 4, 00:21:54.923 "process": { 00:21:54.923 "type": "rebuild", 00:21:54.923 "target": "spare", 00:21:54.923 "progress": { 00:21:54.923 "blocks": 44160, 00:21:54.923 "percent": 22 00:21:54.923 } 00:21:54.923 }, 00:21:54.923 "base_bdevs_list": [ 00:21:54.923 { 00:21:54.923 "name": "spare", 00:21:54.923 "uuid": "edd03535-3c0d-5b6f-a49a-6ae031d3b0eb", 
00:21:54.923 "is_configured": true, 00:21:54.923 "data_offset": 0, 00:21:54.923 "data_size": 65536 00:21:54.923 }, 00:21:54.923 { 00:21:54.923 "name": "BaseBdev2", 00:21:54.923 "uuid": "6d144668-16c7-52eb-bab1-385e8e0f3e00", 00:21:54.923 "is_configured": true, 00:21:54.923 "data_offset": 0, 00:21:54.923 "data_size": 65536 00:21:54.923 }, 00:21:54.923 { 00:21:54.923 "name": "BaseBdev3", 00:21:54.923 "uuid": "95bdea9a-4ae3-5f1a-b9e6-9c2990d15699", 00:21:54.923 "is_configured": true, 00:21:54.923 "data_offset": 0, 00:21:54.923 "data_size": 65536 00:21:54.923 }, 00:21:54.923 { 00:21:54.923 "name": "BaseBdev4", 00:21:54.923 "uuid": "7d869b8d-b8b8-544e-a67e-263a810ef734", 00:21:54.923 "is_configured": true, 00:21:54.923 "data_offset": 0, 00:21:54.923 "data_size": 65536 00:21:54.923 } 00:21:54.923 ] 00:21:54.923 }' 00:21:54.923 12:20:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:54.923 12:20:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:54.923 12:20:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:54.923 12:20:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:54.923 12:20:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:55.857 12:20:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:55.857 12:20:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:55.857 12:20:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:55.857 12:20:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:55.857 12:20:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:55.857 12:20:51 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:55.857 12:20:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.857 12:20:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.857 12:20:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.857 12:20:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.857 12:20:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.857 12:20:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:55.857 "name": "raid_bdev1", 00:21:55.857 "uuid": "e2680e6c-5d31-4b5f-913a-58601bd5ca2f", 00:21:55.857 "strip_size_kb": 64, 00:21:55.857 "state": "online", 00:21:55.857 "raid_level": "raid5f", 00:21:55.857 "superblock": false, 00:21:55.857 "num_base_bdevs": 4, 00:21:55.857 "num_base_bdevs_discovered": 4, 00:21:55.857 "num_base_bdevs_operational": 4, 00:21:55.857 "process": { 00:21:55.857 "type": "rebuild", 00:21:55.857 "target": "spare", 00:21:55.857 "progress": { 00:21:55.857 "blocks": 65280, 00:21:55.857 "percent": 33 00:21:55.857 } 00:21:55.857 }, 00:21:55.857 "base_bdevs_list": [ 00:21:55.857 { 00:21:55.857 "name": "spare", 00:21:55.857 "uuid": "edd03535-3c0d-5b6f-a49a-6ae031d3b0eb", 00:21:55.857 "is_configured": true, 00:21:55.857 "data_offset": 0, 00:21:55.857 "data_size": 65536 00:21:55.857 }, 00:21:55.857 { 00:21:55.857 "name": "BaseBdev2", 00:21:55.857 "uuid": "6d144668-16c7-52eb-bab1-385e8e0f3e00", 00:21:55.857 "is_configured": true, 00:21:55.857 "data_offset": 0, 00:21:55.857 "data_size": 65536 00:21:55.857 }, 00:21:55.857 { 00:21:55.857 "name": "BaseBdev3", 00:21:55.857 "uuid": "95bdea9a-4ae3-5f1a-b9e6-9c2990d15699", 00:21:55.857 "is_configured": true, 00:21:55.857 "data_offset": 0, 00:21:55.857 "data_size": 65536 00:21:55.857 }, 00:21:55.857 { 00:21:55.857 "name": 
"BaseBdev4", 00:21:55.857 "uuid": "7d869b8d-b8b8-544e-a67e-263a810ef734", 00:21:55.857 "is_configured": true, 00:21:55.857 "data_offset": 0, 00:21:55.857 "data_size": 65536 00:21:55.857 } 00:21:55.857 ] 00:21:55.857 }' 00:21:55.857 12:20:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:55.857 12:20:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:55.857 12:20:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:56.115 12:20:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:56.115 12:20:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:57.070 12:20:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:57.070 12:20:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:57.070 12:20:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:57.070 12:20:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:57.070 12:20:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:57.070 12:20:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:57.070 12:20:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.070 12:20:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.070 12:20:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.070 12:20:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.070 12:20:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.070 12:20:53 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:57.070 "name": "raid_bdev1", 00:21:57.070 "uuid": "e2680e6c-5d31-4b5f-913a-58601bd5ca2f", 00:21:57.070 "strip_size_kb": 64, 00:21:57.070 "state": "online", 00:21:57.070 "raid_level": "raid5f", 00:21:57.070 "superblock": false, 00:21:57.070 "num_base_bdevs": 4, 00:21:57.070 "num_base_bdevs_discovered": 4, 00:21:57.070 "num_base_bdevs_operational": 4, 00:21:57.070 "process": { 00:21:57.070 "type": "rebuild", 00:21:57.070 "target": "spare", 00:21:57.070 "progress": { 00:21:57.070 "blocks": 88320, 00:21:57.070 "percent": 44 00:21:57.070 } 00:21:57.070 }, 00:21:57.070 "base_bdevs_list": [ 00:21:57.070 { 00:21:57.070 "name": "spare", 00:21:57.070 "uuid": "edd03535-3c0d-5b6f-a49a-6ae031d3b0eb", 00:21:57.070 "is_configured": true, 00:21:57.070 "data_offset": 0, 00:21:57.070 "data_size": 65536 00:21:57.070 }, 00:21:57.070 { 00:21:57.070 "name": "BaseBdev2", 00:21:57.070 "uuid": "6d144668-16c7-52eb-bab1-385e8e0f3e00", 00:21:57.070 "is_configured": true, 00:21:57.070 "data_offset": 0, 00:21:57.070 "data_size": 65536 00:21:57.070 }, 00:21:57.070 { 00:21:57.070 "name": "BaseBdev3", 00:21:57.070 "uuid": "95bdea9a-4ae3-5f1a-b9e6-9c2990d15699", 00:21:57.070 "is_configured": true, 00:21:57.070 "data_offset": 0, 00:21:57.070 "data_size": 65536 00:21:57.070 }, 00:21:57.070 { 00:21:57.070 "name": "BaseBdev4", 00:21:57.070 "uuid": "7d869b8d-b8b8-544e-a67e-263a810ef734", 00:21:57.070 "is_configured": true, 00:21:57.070 "data_offset": 0, 00:21:57.070 "data_size": 65536 00:21:57.070 } 00:21:57.070 ] 00:21:57.070 }' 00:21:57.070 12:20:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:57.070 12:20:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:57.070 12:20:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:57.070 12:20:53 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:57.070 12:20:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:58.448 12:20:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:58.448 12:20:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:58.448 12:20:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:58.448 12:20:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:58.448 12:20:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:58.448 12:20:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:58.448 12:20:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:58.448 12:20:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:58.448 12:20:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.448 12:20:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.448 12:20:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.448 12:20:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:58.448 "name": "raid_bdev1", 00:21:58.448 "uuid": "e2680e6c-5d31-4b5f-913a-58601bd5ca2f", 00:21:58.448 "strip_size_kb": 64, 00:21:58.448 "state": "online", 00:21:58.448 "raid_level": "raid5f", 00:21:58.448 "superblock": false, 00:21:58.448 "num_base_bdevs": 4, 00:21:58.448 "num_base_bdevs_discovered": 4, 00:21:58.448 "num_base_bdevs_operational": 4, 00:21:58.448 "process": { 00:21:58.448 "type": "rebuild", 00:21:58.448 "target": "spare", 00:21:58.448 "progress": { 00:21:58.448 "blocks": 109440, 00:21:58.448 "percent": 55 00:21:58.448 } 
00:21:58.448 }, 00:21:58.448 "base_bdevs_list": [ 00:21:58.448 { 00:21:58.448 "name": "spare", 00:21:58.448 "uuid": "edd03535-3c0d-5b6f-a49a-6ae031d3b0eb", 00:21:58.448 "is_configured": true, 00:21:58.448 "data_offset": 0, 00:21:58.448 "data_size": 65536 00:21:58.448 }, 00:21:58.448 { 00:21:58.448 "name": "BaseBdev2", 00:21:58.448 "uuid": "6d144668-16c7-52eb-bab1-385e8e0f3e00", 00:21:58.448 "is_configured": true, 00:21:58.448 "data_offset": 0, 00:21:58.448 "data_size": 65536 00:21:58.448 }, 00:21:58.448 { 00:21:58.448 "name": "BaseBdev3", 00:21:58.448 "uuid": "95bdea9a-4ae3-5f1a-b9e6-9c2990d15699", 00:21:58.448 "is_configured": true, 00:21:58.448 "data_offset": 0, 00:21:58.448 "data_size": 65536 00:21:58.448 }, 00:21:58.448 { 00:21:58.448 "name": "BaseBdev4", 00:21:58.448 "uuid": "7d869b8d-b8b8-544e-a67e-263a810ef734", 00:21:58.448 "is_configured": true, 00:21:58.448 "data_offset": 0, 00:21:58.448 "data_size": 65536 00:21:58.448 } 00:21:58.448 ] 00:21:58.448 }' 00:21:58.448 12:20:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:58.448 12:20:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:58.448 12:20:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:58.448 12:20:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:58.448 12:20:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:59.385 12:20:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:59.385 12:20:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:59.385 12:20:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:59.385 12:20:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:59.385 
12:20:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:59.385 12:20:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:59.385 12:20:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.385 12:20:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.385 12:20:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.385 12:20:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:59.385 12:20:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.385 12:20:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:59.385 "name": "raid_bdev1", 00:21:59.385 "uuid": "e2680e6c-5d31-4b5f-913a-58601bd5ca2f", 00:21:59.385 "strip_size_kb": 64, 00:21:59.385 "state": "online", 00:21:59.385 "raid_level": "raid5f", 00:21:59.385 "superblock": false, 00:21:59.385 "num_base_bdevs": 4, 00:21:59.385 "num_base_bdevs_discovered": 4, 00:21:59.385 "num_base_bdevs_operational": 4, 00:21:59.385 "process": { 00:21:59.385 "type": "rebuild", 00:21:59.385 "target": "spare", 00:21:59.385 "progress": { 00:21:59.385 "blocks": 132480, 00:21:59.385 "percent": 67 00:21:59.385 } 00:21:59.385 }, 00:21:59.385 "base_bdevs_list": [ 00:21:59.385 { 00:21:59.385 "name": "spare", 00:21:59.385 "uuid": "edd03535-3c0d-5b6f-a49a-6ae031d3b0eb", 00:21:59.385 "is_configured": true, 00:21:59.385 "data_offset": 0, 00:21:59.385 "data_size": 65536 00:21:59.385 }, 00:21:59.385 { 00:21:59.385 "name": "BaseBdev2", 00:21:59.385 "uuid": "6d144668-16c7-52eb-bab1-385e8e0f3e00", 00:21:59.385 "is_configured": true, 00:21:59.385 "data_offset": 0, 00:21:59.385 "data_size": 65536 00:21:59.385 }, 00:21:59.385 { 00:21:59.385 "name": "BaseBdev3", 00:21:59.385 "uuid": "95bdea9a-4ae3-5f1a-b9e6-9c2990d15699", 
00:21:59.385 "is_configured": true, 00:21:59.385 "data_offset": 0, 00:21:59.385 "data_size": 65536 00:21:59.385 }, 00:21:59.385 { 00:21:59.385 "name": "BaseBdev4", 00:21:59.385 "uuid": "7d869b8d-b8b8-544e-a67e-263a810ef734", 00:21:59.385 "is_configured": true, 00:21:59.385 "data_offset": 0, 00:21:59.385 "data_size": 65536 00:21:59.385 } 00:21:59.385 ] 00:21:59.385 }' 00:21:59.385 12:20:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:59.385 12:20:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:59.385 12:20:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:59.385 12:20:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:59.385 12:20:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:00.761 12:20:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:00.761 12:20:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:00.761 12:20:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:00.761 12:20:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:00.761 12:20:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:00.761 12:20:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:00.761 12:20:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:00.761 12:20:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:00.761 12:20:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.761 12:20:56 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:22:00.761 12:20:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.761 12:20:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:00.761 "name": "raid_bdev1", 00:22:00.761 "uuid": "e2680e6c-5d31-4b5f-913a-58601bd5ca2f", 00:22:00.761 "strip_size_kb": 64, 00:22:00.761 "state": "online", 00:22:00.761 "raid_level": "raid5f", 00:22:00.761 "superblock": false, 00:22:00.761 "num_base_bdevs": 4, 00:22:00.761 "num_base_bdevs_discovered": 4, 00:22:00.761 "num_base_bdevs_operational": 4, 00:22:00.761 "process": { 00:22:00.761 "type": "rebuild", 00:22:00.761 "target": "spare", 00:22:00.761 "progress": { 00:22:00.761 "blocks": 153600, 00:22:00.761 "percent": 78 00:22:00.761 } 00:22:00.761 }, 00:22:00.761 "base_bdevs_list": [ 00:22:00.761 { 00:22:00.761 "name": "spare", 00:22:00.761 "uuid": "edd03535-3c0d-5b6f-a49a-6ae031d3b0eb", 00:22:00.761 "is_configured": true, 00:22:00.761 "data_offset": 0, 00:22:00.761 "data_size": 65536 00:22:00.761 }, 00:22:00.761 { 00:22:00.761 "name": "BaseBdev2", 00:22:00.761 "uuid": "6d144668-16c7-52eb-bab1-385e8e0f3e00", 00:22:00.761 "is_configured": true, 00:22:00.761 "data_offset": 0, 00:22:00.761 "data_size": 65536 00:22:00.761 }, 00:22:00.761 { 00:22:00.761 "name": "BaseBdev3", 00:22:00.761 "uuid": "95bdea9a-4ae3-5f1a-b9e6-9c2990d15699", 00:22:00.761 "is_configured": true, 00:22:00.761 "data_offset": 0, 00:22:00.761 "data_size": 65536 00:22:00.761 }, 00:22:00.761 { 00:22:00.761 "name": "BaseBdev4", 00:22:00.761 "uuid": "7d869b8d-b8b8-544e-a67e-263a810ef734", 00:22:00.761 "is_configured": true, 00:22:00.761 "data_offset": 0, 00:22:00.761 "data_size": 65536 00:22:00.761 } 00:22:00.761 ] 00:22:00.761 }' 00:22:00.761 12:20:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:00.761 12:20:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:22:00.761 12:20:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:00.761 12:20:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:00.761 12:20:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:01.787 12:20:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:01.787 12:20:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:01.787 12:20:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:01.787 12:20:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:01.787 12:20:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:01.787 12:20:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:01.787 12:20:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:01.787 12:20:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:01.787 12:20:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.787 12:20:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.787 12:20:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.787 12:20:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:01.787 "name": "raid_bdev1", 00:22:01.787 "uuid": "e2680e6c-5d31-4b5f-913a-58601bd5ca2f", 00:22:01.787 "strip_size_kb": 64, 00:22:01.787 "state": "online", 00:22:01.787 "raid_level": "raid5f", 00:22:01.787 "superblock": false, 00:22:01.787 "num_base_bdevs": 4, 00:22:01.787 "num_base_bdevs_discovered": 4, 00:22:01.787 "num_base_bdevs_operational": 4, 00:22:01.787 
"process": { 00:22:01.787 "type": "rebuild", 00:22:01.787 "target": "spare", 00:22:01.787 "progress": { 00:22:01.787 "blocks": 176640, 00:22:01.787 "percent": 89 00:22:01.787 } 00:22:01.787 }, 00:22:01.787 "base_bdevs_list": [ 00:22:01.787 { 00:22:01.787 "name": "spare", 00:22:01.787 "uuid": "edd03535-3c0d-5b6f-a49a-6ae031d3b0eb", 00:22:01.787 "is_configured": true, 00:22:01.787 "data_offset": 0, 00:22:01.787 "data_size": 65536 00:22:01.787 }, 00:22:01.787 { 00:22:01.787 "name": "BaseBdev2", 00:22:01.787 "uuid": "6d144668-16c7-52eb-bab1-385e8e0f3e00", 00:22:01.787 "is_configured": true, 00:22:01.787 "data_offset": 0, 00:22:01.787 "data_size": 65536 00:22:01.787 }, 00:22:01.787 { 00:22:01.787 "name": "BaseBdev3", 00:22:01.787 "uuid": "95bdea9a-4ae3-5f1a-b9e6-9c2990d15699", 00:22:01.787 "is_configured": true, 00:22:01.787 "data_offset": 0, 00:22:01.787 "data_size": 65536 00:22:01.787 }, 00:22:01.787 { 00:22:01.787 "name": "BaseBdev4", 00:22:01.787 "uuid": "7d869b8d-b8b8-544e-a67e-263a810ef734", 00:22:01.787 "is_configured": true, 00:22:01.787 "data_offset": 0, 00:22:01.787 "data_size": 65536 00:22:01.787 } 00:22:01.787 ] 00:22:01.787 }' 00:22:01.787 12:20:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:01.787 12:20:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:01.787 12:20:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:01.787 12:20:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:01.787 12:20:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:02.722 [2024-11-25 12:20:58.701242] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:02.722 [2024-11-25 12:20:58.701354] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:02.722 [2024-11-25 
12:20:58.701428] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:02.722 12:20:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:02.722 12:20:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:02.723 12:20:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:02.723 12:20:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:02.723 12:20:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:02.723 12:20:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:02.723 12:20:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:02.723 12:20:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.723 12:20:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:02.723 12:20:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.723 12:20:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.980 12:20:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:02.980 "name": "raid_bdev1", 00:22:02.980 "uuid": "e2680e6c-5d31-4b5f-913a-58601bd5ca2f", 00:22:02.980 "strip_size_kb": 64, 00:22:02.980 "state": "online", 00:22:02.980 "raid_level": "raid5f", 00:22:02.980 "superblock": false, 00:22:02.980 "num_base_bdevs": 4, 00:22:02.980 "num_base_bdevs_discovered": 4, 00:22:02.980 "num_base_bdevs_operational": 4, 00:22:02.980 "base_bdevs_list": [ 00:22:02.980 { 00:22:02.980 "name": "spare", 00:22:02.980 "uuid": "edd03535-3c0d-5b6f-a49a-6ae031d3b0eb", 00:22:02.980 "is_configured": true, 00:22:02.980 "data_offset": 0, 00:22:02.980 "data_size": 65536 
00:22:02.980 }, 00:22:02.980 { 00:22:02.980 "name": "BaseBdev2", 00:22:02.980 "uuid": "6d144668-16c7-52eb-bab1-385e8e0f3e00", 00:22:02.980 "is_configured": true, 00:22:02.980 "data_offset": 0, 00:22:02.980 "data_size": 65536 00:22:02.980 }, 00:22:02.980 { 00:22:02.980 "name": "BaseBdev3", 00:22:02.980 "uuid": "95bdea9a-4ae3-5f1a-b9e6-9c2990d15699", 00:22:02.981 "is_configured": true, 00:22:02.981 "data_offset": 0, 00:22:02.981 "data_size": 65536 00:22:02.981 }, 00:22:02.981 { 00:22:02.981 "name": "BaseBdev4", 00:22:02.981 "uuid": "7d869b8d-b8b8-544e-a67e-263a810ef734", 00:22:02.981 "is_configured": true, 00:22:02.981 "data_offset": 0, 00:22:02.981 "data_size": 65536 00:22:02.981 } 00:22:02.981 ] 00:22:02.981 }' 00:22:02.981 12:20:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:02.981 12:20:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:02.981 12:20:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:02.981 12:20:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:22:02.981 12:20:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:22:02.981 12:20:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:02.981 12:20:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:02.981 12:20:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:02.981 12:20:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:02.981 12:20:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:02.981 12:20:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:02.981 12:20:58 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.981 12:20:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:02.981 12:20:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.981 12:20:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.981 12:20:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:02.981 "name": "raid_bdev1", 00:22:02.981 "uuid": "e2680e6c-5d31-4b5f-913a-58601bd5ca2f", 00:22:02.981 "strip_size_kb": 64, 00:22:02.981 "state": "online", 00:22:02.981 "raid_level": "raid5f", 00:22:02.981 "superblock": false, 00:22:02.981 "num_base_bdevs": 4, 00:22:02.981 "num_base_bdevs_discovered": 4, 00:22:02.981 "num_base_bdevs_operational": 4, 00:22:02.981 "base_bdevs_list": [ 00:22:02.981 { 00:22:02.981 "name": "spare", 00:22:02.981 "uuid": "edd03535-3c0d-5b6f-a49a-6ae031d3b0eb", 00:22:02.981 "is_configured": true, 00:22:02.981 "data_offset": 0, 00:22:02.981 "data_size": 65536 00:22:02.981 }, 00:22:02.981 { 00:22:02.981 "name": "BaseBdev2", 00:22:02.981 "uuid": "6d144668-16c7-52eb-bab1-385e8e0f3e00", 00:22:02.981 "is_configured": true, 00:22:02.981 "data_offset": 0, 00:22:02.981 "data_size": 65536 00:22:02.981 }, 00:22:02.981 { 00:22:02.981 "name": "BaseBdev3", 00:22:02.981 "uuid": "95bdea9a-4ae3-5f1a-b9e6-9c2990d15699", 00:22:02.981 "is_configured": true, 00:22:02.981 "data_offset": 0, 00:22:02.981 "data_size": 65536 00:22:02.981 }, 00:22:02.981 { 00:22:02.981 "name": "BaseBdev4", 00:22:02.981 "uuid": "7d869b8d-b8b8-544e-a67e-263a810ef734", 00:22:02.981 "is_configured": true, 00:22:02.981 "data_offset": 0, 00:22:02.981 "data_size": 65536 00:22:02.981 } 00:22:02.981 ] 00:22:02.981 }' 00:22:02.981 12:20:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:02.981 12:20:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == 
\n\o\n\e ]] 00:22:02.981 12:20:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:03.239 12:20:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:03.239 12:20:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:22:03.239 12:20:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:03.239 12:20:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:03.239 12:20:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:03.239 12:20:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:03.239 12:20:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:03.239 12:20:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:03.239 12:20:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:03.239 12:20:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:03.239 12:20:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:03.239 12:20:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.239 12:20:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.239 12:20:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.239 12:20:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.239 12:20:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.239 12:20:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:03.239 "name": 
"raid_bdev1", 00:22:03.239 "uuid": "e2680e6c-5d31-4b5f-913a-58601bd5ca2f", 00:22:03.239 "strip_size_kb": 64, 00:22:03.239 "state": "online", 00:22:03.239 "raid_level": "raid5f", 00:22:03.239 "superblock": false, 00:22:03.239 "num_base_bdevs": 4, 00:22:03.239 "num_base_bdevs_discovered": 4, 00:22:03.239 "num_base_bdevs_operational": 4, 00:22:03.239 "base_bdevs_list": [ 00:22:03.239 { 00:22:03.239 "name": "spare", 00:22:03.239 "uuid": "edd03535-3c0d-5b6f-a49a-6ae031d3b0eb", 00:22:03.239 "is_configured": true, 00:22:03.239 "data_offset": 0, 00:22:03.239 "data_size": 65536 00:22:03.239 }, 00:22:03.239 { 00:22:03.239 "name": "BaseBdev2", 00:22:03.239 "uuid": "6d144668-16c7-52eb-bab1-385e8e0f3e00", 00:22:03.239 "is_configured": true, 00:22:03.239 "data_offset": 0, 00:22:03.239 "data_size": 65536 00:22:03.239 }, 00:22:03.239 { 00:22:03.239 "name": "BaseBdev3", 00:22:03.239 "uuid": "95bdea9a-4ae3-5f1a-b9e6-9c2990d15699", 00:22:03.239 "is_configured": true, 00:22:03.239 "data_offset": 0, 00:22:03.239 "data_size": 65536 00:22:03.239 }, 00:22:03.239 { 00:22:03.239 "name": "BaseBdev4", 00:22:03.239 "uuid": "7d869b8d-b8b8-544e-a67e-263a810ef734", 00:22:03.239 "is_configured": true, 00:22:03.239 "data_offset": 0, 00:22:03.239 "data_size": 65536 00:22:03.239 } 00:22:03.239 ] 00:22:03.239 }' 00:22:03.239 12:20:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:03.239 12:20:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.804 12:20:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:03.804 12:20:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.804 12:20:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.804 [2024-11-25 12:20:59.604391] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:03.804 [2024-11-25 12:20:59.604442] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:03.804 [2024-11-25 12:20:59.604548] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:03.804 [2024-11-25 12:20:59.604677] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:03.804 [2024-11-25 12:20:59.604696] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:03.804 12:20:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.804 12:20:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.804 12:20:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.804 12:20:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:22:03.804 12:20:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.804 12:20:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.804 12:20:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:22:03.804 12:20:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:22:03.804 12:20:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:22:03.804 12:20:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:03.804 12:20:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:03.804 12:20:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:03.804 12:20:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:03.804 12:20:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:03.804 12:20:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:03.804 12:20:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:22:03.804 12:20:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:03.804 12:20:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:03.805 12:20:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:04.063 /dev/nbd0 00:22:04.063 12:20:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:04.063 12:20:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:04.063 12:20:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:04.063 12:20:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:22:04.063 12:20:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:04.063 12:20:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:04.063 12:20:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:04.063 12:20:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:22:04.063 12:20:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:04.063 12:20:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:04.063 12:20:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:04.063 1+0 records in 00:22:04.063 1+0 records out 00:22:04.063 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00326244 s, 1.3 MB/s 00:22:04.063 12:20:59 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:04.063 12:20:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:22:04.063 12:20:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:04.063 12:20:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:04.063 12:20:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:22:04.063 12:20:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:04.063 12:20:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:04.063 12:20:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:22:04.322 /dev/nbd1 00:22:04.322 12:21:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:04.322 12:21:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:04.322 12:21:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:22:04.322 12:21:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:22:04.322 12:21:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:04.322 12:21:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:04.322 12:21:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:22:04.322 12:21:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:22:04.322 12:21:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:04.322 12:21:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 
20 )) 00:22:04.322 12:21:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:04.322 1+0 records in 00:22:04.322 1+0 records out 00:22:04.322 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000308832 s, 13.3 MB/s 00:22:04.322 12:21:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:04.322 12:21:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:22:04.322 12:21:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:04.322 12:21:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:04.322 12:21:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:22:04.322 12:21:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:04.322 12:21:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:04.322 12:21:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:04.581 12:21:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:22:04.581 12:21:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:04.581 12:21:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:04.581 12:21:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:04.581 12:21:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:22:04.581 12:21:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:04.581 12:21:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:04.839 12:21:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:04.839 12:21:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:04.839 12:21:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:04.839 12:21:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:04.839 12:21:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:04.839 12:21:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:04.839 12:21:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:22:04.839 12:21:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:22:04.839 12:21:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:04.839 12:21:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:22:05.189 12:21:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:05.189 12:21:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:05.189 12:21:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:05.189 12:21:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:05.189 12:21:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:05.189 12:21:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:05.189 12:21:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:22:05.189 12:21:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:22:05.189 12:21:01 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:22:05.189 12:21:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85008 00:22:05.189 12:21:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 85008 ']' 00:22:05.189 12:21:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 85008 00:22:05.189 12:21:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:22:05.189 12:21:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:05.189 12:21:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85008 00:22:05.189 killing process with pid 85008 00:22:05.189 Received shutdown signal, test time was about 60.000000 seconds 00:22:05.189 00:22:05.189 Latency(us) 00:22:05.189 [2024-11-25T12:21:01.280Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:05.189 [2024-11-25T12:21:01.280Z] =================================================================================================================== 00:22:05.189 [2024-11-25T12:21:01.280Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:05.189 12:21:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:05.189 12:21:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:05.189 12:21:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85008' 00:22:05.189 12:21:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 85008 00:22:05.189 [2024-11-25 12:21:01.159894] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:05.189 12:21:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 85008 00:22:05.773 [2024-11-25 12:21:01.602407] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:22:06.708 12:21:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:22:06.708 00:22:06.708 real 0m20.133s 00:22:06.708 user 0m25.107s 00:22:06.708 sys 0m2.242s 00:22:06.708 12:21:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:06.708 12:21:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:06.708 ************************************ 00:22:06.708 END TEST raid5f_rebuild_test 00:22:06.708 ************************************ 00:22:06.708 12:21:02 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:22:06.708 12:21:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:22:06.708 12:21:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:06.708 12:21:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:06.708 ************************************ 00:22:06.708 START TEST raid5f_rebuild_test_sb 00:22:06.708 ************************************ 00:22:06.708 12:21:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:22:06.708 12:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:22:06.708 12:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:22:06.708 12:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:22:06.708 12:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:22:06.708 12:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:22:06.708 12:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:22:06.708 12:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:06.708 12:21:02 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:22:06.708 12:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:06.708 12:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:06.708 12:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:22:06.708 12:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:06.708 12:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:06.708 12:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:22:06.708 12:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:06.708 12:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:06.708 12:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:22:06.708 12:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:06.708 12:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:06.708 12:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:06.708 12:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:22:06.708 12:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:22:06.708 12:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:22:06.708 12:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:22:06.708 12:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:22:06.708 12:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:22:06.708 
12:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:22:06.708 12:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:22:06.708 12:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:22:06.708 12:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:22:06.708 12:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:22:06.708 12:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:22:06.708 12:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85517 00:22:06.708 12:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85517 00:22:06.708 12:21:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85517 ']' 00:22:06.708 12:21:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.708 12:21:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:06.708 12:21:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.708 12:21:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:06.708 12:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:06.708 12:21:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.967 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:22:06.967 Zero copy mechanism will not be used. 00:22:06.967 [2024-11-25 12:21:02.798774] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:22:06.967 [2024-11-25 12:21:02.798955] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85517 ] 00:22:06.967 [2024-11-25 12:21:02.991940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.233 [2024-11-25 12:21:03.145052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.493 [2024-11-25 12:21:03.369161] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:07.493 [2024-11-25 12:21:03.369243] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:08.060 12:21:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:08.060 12:21:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:22:08.060 12:21:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:08.060 12:21:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:08.060 12:21:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.060 12:21:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.060 BaseBdev1_malloc 00:22:08.060 12:21:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.060 12:21:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:08.060 12:21:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.060 
12:21:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.060 [2024-11-25 12:21:03.926701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:08.060 [2024-11-25 12:21:03.926796] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:08.060 [2024-11-25 12:21:03.926846] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:08.060 [2024-11-25 12:21:03.926865] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:08.060 [2024-11-25 12:21:03.929708] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:08.060 [2024-11-25 12:21:03.929926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:08.060 BaseBdev1 00:22:08.060 12:21:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.060 12:21:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:08.060 12:21:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:08.060 12:21:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.060 12:21:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.060 BaseBdev2_malloc 00:22:08.060 12:21:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.060 12:21:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:08.060 12:21:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.061 12:21:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.061 [2024-11-25 12:21:03.983064] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev2_malloc 00:22:08.061 [2024-11-25 12:21:03.983281] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:08.061 [2024-11-25 12:21:03.983322] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:08.061 [2024-11-25 12:21:03.983363] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:08.061 [2024-11-25 12:21:03.986092] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:08.061 [2024-11-25 12:21:03.986144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:08.061 BaseBdev2 00:22:08.061 12:21:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.061 12:21:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:08.061 12:21:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:08.061 12:21:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.061 12:21:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.061 BaseBdev3_malloc 00:22:08.061 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.061 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:08.061 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.061 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.061 [2024-11-25 12:21:04.048418] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:08.061 [2024-11-25 12:21:04.048653] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:08.061 [2024-11-25 
12:21:04.048705] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:08.061 [2024-11-25 12:21:04.048727] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:08.061 [2024-11-25 12:21:04.051538] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:08.061 [2024-11-25 12:21:04.051591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:08.061 BaseBdev3 00:22:08.061 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.061 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:08.061 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:08.061 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.061 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.061 BaseBdev4_malloc 00:22:08.061 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.061 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:08.061 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.061 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.061 [2024-11-25 12:21:04.100552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:08.061 [2024-11-25 12:21:04.100635] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:08.061 [2024-11-25 12:21:04.100675] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:22:08.061 [2024-11-25 12:21:04.100693] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:08.061 [2024-11-25 12:21:04.103471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:08.061 [2024-11-25 12:21:04.103531] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:08.061 BaseBdev4 00:22:08.061 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.061 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:22:08.061 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.061 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.061 spare_malloc 00:22:08.061 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.061 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:08.061 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.061 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.319 spare_delay 00:22:08.319 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.319 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:08.319 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.319 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.319 [2024-11-25 12:21:04.164548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:08.319 [2024-11-25 12:21:04.164627] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
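At this point the test has stacked, for each base device, a 32 MiB malloc bdev (512-byte blocks, from `bdev_malloc_create 32 512`) behind a passthru bdev, plus a delayed `spare`. A hedged sketch of where the per-bdev geometry reported later by `bdev_raid_get_bdevs` comes from — the 1 MiB (2048-block) superblock reservation is inferred from the log output, not taken from the SPDK source:

```python
# Geometry of one 32 MiB malloc base bdev as used by this test.
# Assumption: the raid superblock ('-s' flag) reserves 2048 blocks (1 MiB)
# at the front of each base bdev; this is inferred from the log, not from
# the bdev_raid implementation itself.
MIB = 1024 * 1024
block_size = 512                       # from: bdev_malloc_create 32 512
total_blocks = 32 * MIB // block_size  # blocks per base bdev
data_offset = 2048                     # blocks reserved for the superblock
data_size = total_blocks - data_offset # usable data blocks per base bdev
print(total_blocks, data_offset, data_size)
```

This matches the `"data_offset": 2048, "data_size": 63488` reported for every base bdev in the `bdev_raid_get_bdevs` JSON further down.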
00:22:08.319 [2024-11-25 12:21:04.164668] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:22:08.319 [2024-11-25 12:21:04.164686] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:08.319 [2024-11-25 12:21:04.167440] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:08.319 [2024-11-25 12:21:04.167492] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:08.319 spare 00:22:08.319 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.319 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:22:08.319 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.319 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.319 [2024-11-25 12:21:04.172613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:08.319 [2024-11-25 12:21:04.175030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:08.319 [2024-11-25 12:21:04.175253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:08.319 [2024-11-25 12:21:04.175390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:08.319 [2024-11-25 12:21:04.175665] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:08.319 [2024-11-25 12:21:04.175691] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:08.319 [2024-11-25 12:21:04.176006] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:08.319 [2024-11-25 12:21:04.182866] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007780 00:22:08.319 [2024-11-25 12:21:04.183002] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:08.319 [2024-11-25 12:21:04.183447] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:08.319 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.319 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:22:08.319 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:08.319 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:08.319 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:08.319 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:08.319 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:08.319 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:08.319 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:08.320 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:08.320 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:08.320 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.320 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.320 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:08.320 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.320 
12:21:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.320 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:08.320 "name": "raid_bdev1", 00:22:08.320 "uuid": "77b4891a-6486-4810-888c-1f45d89c390f", 00:22:08.320 "strip_size_kb": 64, 00:22:08.320 "state": "online", 00:22:08.320 "raid_level": "raid5f", 00:22:08.320 "superblock": true, 00:22:08.320 "num_base_bdevs": 4, 00:22:08.320 "num_base_bdevs_discovered": 4, 00:22:08.320 "num_base_bdevs_operational": 4, 00:22:08.320 "base_bdevs_list": [ 00:22:08.320 { 00:22:08.320 "name": "BaseBdev1", 00:22:08.320 "uuid": "df297fda-0811-51f3-a63b-73129b9bf217", 00:22:08.320 "is_configured": true, 00:22:08.320 "data_offset": 2048, 00:22:08.320 "data_size": 63488 00:22:08.320 }, 00:22:08.320 { 00:22:08.320 "name": "BaseBdev2", 00:22:08.320 "uuid": "43e6209a-5435-5e17-b507-96ea5793429c", 00:22:08.320 "is_configured": true, 00:22:08.320 "data_offset": 2048, 00:22:08.320 "data_size": 63488 00:22:08.320 }, 00:22:08.320 { 00:22:08.320 "name": "BaseBdev3", 00:22:08.320 "uuid": "1c671a45-e868-53c5-8e23-2a24fc333276", 00:22:08.320 "is_configured": true, 00:22:08.320 "data_offset": 2048, 00:22:08.320 "data_size": 63488 00:22:08.320 }, 00:22:08.320 { 00:22:08.320 "name": "BaseBdev4", 00:22:08.320 "uuid": "1aae62bc-3e99-5514-b80b-e34d7871dcbf", 00:22:08.320 "is_configured": true, 00:22:08.320 "data_offset": 2048, 00:22:08.320 "data_size": 63488 00:22:08.320 } 00:22:08.320 ] 00:22:08.320 }' 00:22:08.320 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:08.320 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.887 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:22:08.887 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:08.887 12:21:04 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.887 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.887 [2024-11-25 12:21:04.739361] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:08.887 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.887 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:22:08.887 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.887 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.887 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.887 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:08.887 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.887 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:22:08.887 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:22:08.887 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:22:08.887 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:22:08.887 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:22:08.887 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:08.887 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:08.887 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:08.887 12:21:04 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:08.887 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:08.887 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:22:08.887 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:08.887 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:08.887 12:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:09.146 [2024-11-25 12:21:05.059205] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:22:09.146 /dev/nbd0 00:22:09.146 12:21:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:09.146 12:21:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:09.146 12:21:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:09.146 12:21:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:22:09.146 12:21:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:09.146 12:21:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:09.146 12:21:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:09.146 12:21:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:22:09.146 12:21:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:09.146 12:21:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:09.146 12:21:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:09.146 1+0 records in 00:22:09.146 1+0 records out 00:22:09.146 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306292 s, 13.4 MB/s 00:22:09.146 12:21:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:09.146 12:21:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:22:09.146 12:21:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:09.146 12:21:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:09.146 12:21:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:22:09.146 12:21:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:09.146 12:21:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:09.146 12:21:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:22:09.146 12:21:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:22:09.146 12:21:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:22:09.146 12:21:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:22:09.714 496+0 records in 00:22:09.714 496+0 records out 00:22:09.714 97517568 bytes (98 MB, 93 MiB) copied, 0.596394 s, 164 MB/s 00:22:09.714 12:21:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:22:09.714 12:21:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:09.714 12:21:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:09.714 12:21:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:09.714 12:21:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:22:09.714 12:21:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:09.714 12:21:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:09.972 12:21:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:09.972 [2024-11-25 12:21:06.016162] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:09.972 12:21:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:09.972 12:21:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:09.972 12:21:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:09.972 12:21:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:09.972 12:21:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:09.972 12:21:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:22:09.972 12:21:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:22:09.972 12:21:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:22:09.972 12:21:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.972 12:21:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.972 [2024-11-25 12:21:06.039900] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:09.972 12:21:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.972 12:21:06 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:09.972 12:21:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:09.972 12:21:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:09.972 12:21:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:09.972 12:21:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:09.973 12:21:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:09.973 12:21:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:09.973 12:21:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:09.973 12:21:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:09.973 12:21:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:09.973 12:21:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.973 12:21:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.973 12:21:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:09.973 12:21:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.232 12:21:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.232 12:21:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:10.232 "name": "raid_bdev1", 00:22:10.232 "uuid": "77b4891a-6486-4810-888c-1f45d89c390f", 00:22:10.232 "strip_size_kb": 64, 00:22:10.232 "state": "online", 00:22:10.232 "raid_level": "raid5f", 00:22:10.232 "superblock": true, 00:22:10.232 "num_base_bdevs": 4, 
00:22:10.232 "num_base_bdevs_discovered": 3, 00:22:10.232 "num_base_bdevs_operational": 3, 00:22:10.232 "base_bdevs_list": [ 00:22:10.232 { 00:22:10.232 "name": null, 00:22:10.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.232 "is_configured": false, 00:22:10.232 "data_offset": 0, 00:22:10.232 "data_size": 63488 00:22:10.232 }, 00:22:10.232 { 00:22:10.232 "name": "BaseBdev2", 00:22:10.232 "uuid": "43e6209a-5435-5e17-b507-96ea5793429c", 00:22:10.232 "is_configured": true, 00:22:10.232 "data_offset": 2048, 00:22:10.232 "data_size": 63488 00:22:10.232 }, 00:22:10.232 { 00:22:10.232 "name": "BaseBdev3", 00:22:10.232 "uuid": "1c671a45-e868-53c5-8e23-2a24fc333276", 00:22:10.232 "is_configured": true, 00:22:10.232 "data_offset": 2048, 00:22:10.232 "data_size": 63488 00:22:10.232 }, 00:22:10.232 { 00:22:10.232 "name": "BaseBdev4", 00:22:10.232 "uuid": "1aae62bc-3e99-5514-b80b-e34d7871dcbf", 00:22:10.232 "is_configured": true, 00:22:10.232 "data_offset": 2048, 00:22:10.232 "data_size": 63488 00:22:10.232 } 00:22:10.232 ] 00:22:10.232 }' 00:22:10.232 12:21:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:10.232 12:21:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.491 12:21:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:10.491 12:21:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.491 12:21:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.491 [2024-11-25 12:21:06.544043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:10.491 [2024-11-25 12:21:06.558181] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:22:10.491 12:21:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.491 12:21:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:22:10.491 [2024-11-25 12:21:06.567082] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:11.870 12:21:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:11.870 12:21:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:11.870 12:21:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:11.870 12:21:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:11.870 12:21:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:11.870 12:21:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.870 12:21:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.870 12:21:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:11.870 12:21:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.870 12:21:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.870 12:21:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:11.870 "name": "raid_bdev1", 00:22:11.870 "uuid": "77b4891a-6486-4810-888c-1f45d89c390f", 00:22:11.870 "strip_size_kb": 64, 00:22:11.870 "state": "online", 00:22:11.870 "raid_level": "raid5f", 00:22:11.870 "superblock": true, 00:22:11.870 "num_base_bdevs": 4, 00:22:11.870 "num_base_bdevs_discovered": 4, 00:22:11.870 "num_base_bdevs_operational": 4, 00:22:11.870 "process": { 00:22:11.870 "type": "rebuild", 00:22:11.870 "target": "spare", 00:22:11.870 "progress": { 00:22:11.870 "blocks": 17280, 00:22:11.870 "percent": 9 00:22:11.870 } 
00:22:11.870 }, 00:22:11.870 "base_bdevs_list": [ 00:22:11.870 { 00:22:11.870 "name": "spare", 00:22:11.870 "uuid": "6911bf26-857f-59a1-b605-f1ad818350fe", 00:22:11.870 "is_configured": true, 00:22:11.870 "data_offset": 2048, 00:22:11.870 "data_size": 63488 00:22:11.870 }, 00:22:11.870 { 00:22:11.870 "name": "BaseBdev2", 00:22:11.870 "uuid": "43e6209a-5435-5e17-b507-96ea5793429c", 00:22:11.870 "is_configured": true, 00:22:11.870 "data_offset": 2048, 00:22:11.870 "data_size": 63488 00:22:11.870 }, 00:22:11.870 { 00:22:11.870 "name": "BaseBdev3", 00:22:11.870 "uuid": "1c671a45-e868-53c5-8e23-2a24fc333276", 00:22:11.870 "is_configured": true, 00:22:11.870 "data_offset": 2048, 00:22:11.870 "data_size": 63488 00:22:11.870 }, 00:22:11.870 { 00:22:11.870 "name": "BaseBdev4", 00:22:11.870 "uuid": "1aae62bc-3e99-5514-b80b-e34d7871dcbf", 00:22:11.870 "is_configured": true, 00:22:11.870 "data_offset": 2048, 00:22:11.870 "data_size": 63488 00:22:11.870 } 00:22:11.870 ] 00:22:11.870 }' 00:22:11.870 12:21:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:11.870 12:21:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:11.870 12:21:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:11.870 12:21:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:11.870 12:21:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:11.870 12:21:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.870 12:21:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.870 [2024-11-25 12:21:07.720401] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:11.870 [2024-11-25 12:21:07.779558] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished 
rebuild on raid bdev raid_bdev1: No such device 00:22:11.870 [2024-11-25 12:21:07.779951] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:11.870 [2024-11-25 12:21:07.780102] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:11.870 [2024-11-25 12:21:07.780163] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:11.870 12:21:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.870 12:21:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:11.870 12:21:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:11.870 12:21:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:11.870 12:21:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:11.870 12:21:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:11.871 12:21:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:11.871 12:21:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:11.871 12:21:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:11.871 12:21:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:11.871 12:21:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:11.871 12:21:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.871 12:21:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.871 12:21:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.871 12:21:07 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:11.871 12:21:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.871 12:21:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:11.871 "name": "raid_bdev1", 00:22:11.871 "uuid": "77b4891a-6486-4810-888c-1f45d89c390f", 00:22:11.871 "strip_size_kb": 64, 00:22:11.871 "state": "online", 00:22:11.871 "raid_level": "raid5f", 00:22:11.871 "superblock": true, 00:22:11.871 "num_base_bdevs": 4, 00:22:11.871 "num_base_bdevs_discovered": 3, 00:22:11.871 "num_base_bdevs_operational": 3, 00:22:11.871 "base_bdevs_list": [ 00:22:11.871 { 00:22:11.871 "name": null, 00:22:11.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.871 "is_configured": false, 00:22:11.871 "data_offset": 0, 00:22:11.871 "data_size": 63488 00:22:11.871 }, 00:22:11.871 { 00:22:11.871 "name": "BaseBdev2", 00:22:11.871 "uuid": "43e6209a-5435-5e17-b507-96ea5793429c", 00:22:11.871 "is_configured": true, 00:22:11.871 "data_offset": 2048, 00:22:11.871 "data_size": 63488 00:22:11.871 }, 00:22:11.871 { 00:22:11.871 "name": "BaseBdev3", 00:22:11.871 "uuid": "1c671a45-e868-53c5-8e23-2a24fc333276", 00:22:11.871 "is_configured": true, 00:22:11.871 "data_offset": 2048, 00:22:11.871 "data_size": 63488 00:22:11.871 }, 00:22:11.871 { 00:22:11.871 "name": "BaseBdev4", 00:22:11.871 "uuid": "1aae62bc-3e99-5514-b80b-e34d7871dcbf", 00:22:11.871 "is_configured": true, 00:22:11.871 "data_offset": 2048, 00:22:11.871 "data_size": 63488 00:22:11.871 } 00:22:11.871 ] 00:22:11.871 }' 00:22:11.871 12:21:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:11.871 12:21:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.438 12:21:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:12.438 12:21:08 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:12.438 12:21:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:12.438 12:21:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:12.438 12:21:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:12.438 12:21:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.438 12:21:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.438 12:21:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.438 12:21:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:12.438 12:21:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.438 12:21:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:12.438 "name": "raid_bdev1", 00:22:12.438 "uuid": "77b4891a-6486-4810-888c-1f45d89c390f", 00:22:12.438 "strip_size_kb": 64, 00:22:12.438 "state": "online", 00:22:12.438 "raid_level": "raid5f", 00:22:12.438 "superblock": true, 00:22:12.438 "num_base_bdevs": 4, 00:22:12.438 "num_base_bdevs_discovered": 3, 00:22:12.438 "num_base_bdevs_operational": 3, 00:22:12.438 "base_bdevs_list": [ 00:22:12.438 { 00:22:12.438 "name": null, 00:22:12.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.438 "is_configured": false, 00:22:12.438 "data_offset": 0, 00:22:12.438 "data_size": 63488 00:22:12.438 }, 00:22:12.438 { 00:22:12.438 "name": "BaseBdev2", 00:22:12.438 "uuid": "43e6209a-5435-5e17-b507-96ea5793429c", 00:22:12.438 "is_configured": true, 00:22:12.438 "data_offset": 2048, 00:22:12.438 "data_size": 63488 00:22:12.438 }, 00:22:12.438 { 00:22:12.438 "name": "BaseBdev3", 00:22:12.438 "uuid": 
"1c671a45-e868-53c5-8e23-2a24fc333276", 00:22:12.438 "is_configured": true, 00:22:12.438 "data_offset": 2048, 00:22:12.438 "data_size": 63488 00:22:12.438 }, 00:22:12.438 { 00:22:12.438 "name": "BaseBdev4", 00:22:12.438 "uuid": "1aae62bc-3e99-5514-b80b-e34d7871dcbf", 00:22:12.438 "is_configured": true, 00:22:12.438 "data_offset": 2048, 00:22:12.438 "data_size": 63488 00:22:12.438 } 00:22:12.438 ] 00:22:12.438 }' 00:22:12.438 12:21:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:12.438 12:21:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:12.438 12:21:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:12.438 12:21:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:12.438 12:21:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:12.438 12:21:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.438 12:21:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.438 [2024-11-25 12:21:08.508263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:12.438 [2024-11-25 12:21:08.522231] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:22:12.438 12:21:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.438 12:21:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:22:12.697 [2024-11-25 12:21:08.531259] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:13.632 12:21:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:13.632 12:21:09 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:13.632 12:21:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:13.632 12:21:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:13.632 12:21:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:13.632 12:21:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.632 12:21:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.632 12:21:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.632 12:21:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.632 12:21:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.632 12:21:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:13.632 "name": "raid_bdev1", 00:22:13.632 "uuid": "77b4891a-6486-4810-888c-1f45d89c390f", 00:22:13.632 "strip_size_kb": 64, 00:22:13.632 "state": "online", 00:22:13.632 "raid_level": "raid5f", 00:22:13.632 "superblock": true, 00:22:13.632 "num_base_bdevs": 4, 00:22:13.632 "num_base_bdevs_discovered": 4, 00:22:13.632 "num_base_bdevs_operational": 4, 00:22:13.632 "process": { 00:22:13.632 "type": "rebuild", 00:22:13.632 "target": "spare", 00:22:13.632 "progress": { 00:22:13.632 "blocks": 17280, 00:22:13.632 "percent": 9 00:22:13.632 } 00:22:13.632 }, 00:22:13.632 "base_bdevs_list": [ 00:22:13.632 { 00:22:13.632 "name": "spare", 00:22:13.632 "uuid": "6911bf26-857f-59a1-b605-f1ad818350fe", 00:22:13.632 "is_configured": true, 00:22:13.632 "data_offset": 2048, 00:22:13.632 "data_size": 63488 00:22:13.632 }, 00:22:13.632 { 00:22:13.632 "name": "BaseBdev2", 00:22:13.632 "uuid": "43e6209a-5435-5e17-b507-96ea5793429c", 00:22:13.632 
"is_configured": true, 00:22:13.632 "data_offset": 2048, 00:22:13.632 "data_size": 63488 00:22:13.632 }, 00:22:13.632 { 00:22:13.632 "name": "BaseBdev3", 00:22:13.632 "uuid": "1c671a45-e868-53c5-8e23-2a24fc333276", 00:22:13.632 "is_configured": true, 00:22:13.632 "data_offset": 2048, 00:22:13.632 "data_size": 63488 00:22:13.632 }, 00:22:13.632 { 00:22:13.632 "name": "BaseBdev4", 00:22:13.632 "uuid": "1aae62bc-3e99-5514-b80b-e34d7871dcbf", 00:22:13.632 "is_configured": true, 00:22:13.632 "data_offset": 2048, 00:22:13.632 "data_size": 63488 00:22:13.632 } 00:22:13.632 ] 00:22:13.632 }' 00:22:13.632 12:21:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:13.632 12:21:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:13.632 12:21:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:13.632 12:21:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:13.632 12:21:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:22:13.632 12:21:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:22:13.632 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:22:13.632 12:21:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:22:13.632 12:21:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:22:13.632 12:21:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=688 00:22:13.632 12:21:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:13.632 12:21:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:13.632 12:21:09 bdev_raid.raid5f_rebuild_test_sb -- 
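The `bdev_raid.sh: line 666: [: =: unary operator expected` message recorded above is the classic symptom of passing an unquoted variable that expands to nothing into the POSIX `[` builtin: `'[' = false ']'` shows the left operand vanished, leaving `=` where `test` expected a unary operator. A minimal standalone sketch of the failure mode and the usual fixes (this is an illustration, not the SPDK script itself; `v` is a hypothetical variable name):

```shell
#!/usr/bin/env bash
v=""   # an empty, unquoted variable reproduces the logged error

# Unquoted: `[ $v = false ]` expands to `[ = false ]`, so `test`
# sees two arguments and complains "unary operator expected"
# (exit status 2, message on stderr).
[ $v = false ] 2>/dev/null
echo "unquoted exit status: $?"     # 2

# Quoted: the empty string survives word splitting, the three-argument
# string comparison is well-formed, and it simply evaluates to false.
[ "$v" = false ]
echo "quoted exit status: $?"       # 1

# [[ ]] performs no word splitting on variable expansion, so it is
# immune to this class of bug even without quotes.
[[ $v = false ]]
echo "double-bracket exit status: $?"  # 1
```

Note the script still proceeds after the error because a failed `[` only yields a non-zero status; the `'[' true = true ']'` branch on the same line already matched, so the test run is unaffected.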
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:13.632 12:21:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:13.632 12:21:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:13.632 12:21:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:13.632 12:21:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.632 12:21:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.632 12:21:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.632 12:21:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.632 12:21:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.891 12:21:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:13.891 "name": "raid_bdev1", 00:22:13.891 "uuid": "77b4891a-6486-4810-888c-1f45d89c390f", 00:22:13.891 "strip_size_kb": 64, 00:22:13.891 "state": "online", 00:22:13.891 "raid_level": "raid5f", 00:22:13.891 "superblock": true, 00:22:13.891 "num_base_bdevs": 4, 00:22:13.891 "num_base_bdevs_discovered": 4, 00:22:13.891 "num_base_bdevs_operational": 4, 00:22:13.891 "process": { 00:22:13.891 "type": "rebuild", 00:22:13.891 "target": "spare", 00:22:13.891 "progress": { 00:22:13.891 "blocks": 21120, 00:22:13.891 "percent": 11 00:22:13.891 } 00:22:13.891 }, 00:22:13.891 "base_bdevs_list": [ 00:22:13.891 { 00:22:13.891 "name": "spare", 00:22:13.891 "uuid": "6911bf26-857f-59a1-b605-f1ad818350fe", 00:22:13.891 "is_configured": true, 00:22:13.891 "data_offset": 2048, 00:22:13.891 "data_size": 63488 00:22:13.891 }, 00:22:13.891 { 00:22:13.891 "name": "BaseBdev2", 00:22:13.891 "uuid": "43e6209a-5435-5e17-b507-96ea5793429c", 00:22:13.891 
"is_configured": true, 00:22:13.891 "data_offset": 2048, 00:22:13.891 "data_size": 63488 00:22:13.891 }, 00:22:13.891 { 00:22:13.891 "name": "BaseBdev3", 00:22:13.891 "uuid": "1c671a45-e868-53c5-8e23-2a24fc333276", 00:22:13.891 "is_configured": true, 00:22:13.891 "data_offset": 2048, 00:22:13.891 "data_size": 63488 00:22:13.891 }, 00:22:13.891 { 00:22:13.891 "name": "BaseBdev4", 00:22:13.891 "uuid": "1aae62bc-3e99-5514-b80b-e34d7871dcbf", 00:22:13.891 "is_configured": true, 00:22:13.891 "data_offset": 2048, 00:22:13.891 "data_size": 63488 00:22:13.891 } 00:22:13.891 ] 00:22:13.891 }' 00:22:13.891 12:21:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:13.891 12:21:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:13.891 12:21:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:13.891 12:21:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:13.891 12:21:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:14.826 12:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:14.826 12:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:14.826 12:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:14.826 12:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:14.826 12:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:14.826 12:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:14.826 12:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:14.826 12:21:10 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.826 12:21:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.826 12:21:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:14.826 12:21:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.826 12:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:14.826 "name": "raid_bdev1", 00:22:14.826 "uuid": "77b4891a-6486-4810-888c-1f45d89c390f", 00:22:14.826 "strip_size_kb": 64, 00:22:14.826 "state": "online", 00:22:14.826 "raid_level": "raid5f", 00:22:14.826 "superblock": true, 00:22:14.826 "num_base_bdevs": 4, 00:22:14.826 "num_base_bdevs_discovered": 4, 00:22:14.826 "num_base_bdevs_operational": 4, 00:22:14.826 "process": { 00:22:14.826 "type": "rebuild", 00:22:14.826 "target": "spare", 00:22:14.826 "progress": { 00:22:14.826 "blocks": 44160, 00:22:14.826 "percent": 23 00:22:14.826 } 00:22:14.826 }, 00:22:14.826 "base_bdevs_list": [ 00:22:14.826 { 00:22:14.826 "name": "spare", 00:22:14.826 "uuid": "6911bf26-857f-59a1-b605-f1ad818350fe", 00:22:14.826 "is_configured": true, 00:22:14.826 "data_offset": 2048, 00:22:14.826 "data_size": 63488 00:22:14.826 }, 00:22:14.826 { 00:22:14.826 "name": "BaseBdev2", 00:22:14.826 "uuid": "43e6209a-5435-5e17-b507-96ea5793429c", 00:22:14.826 "is_configured": true, 00:22:14.826 "data_offset": 2048, 00:22:14.826 "data_size": 63488 00:22:14.826 }, 00:22:14.826 { 00:22:14.826 "name": "BaseBdev3", 00:22:14.826 "uuid": "1c671a45-e868-53c5-8e23-2a24fc333276", 00:22:14.826 "is_configured": true, 00:22:14.826 "data_offset": 2048, 00:22:14.826 "data_size": 63488 00:22:14.826 }, 00:22:14.826 { 00:22:14.826 "name": "BaseBdev4", 00:22:14.826 "uuid": "1aae62bc-3e99-5514-b80b-e34d7871dcbf", 00:22:14.826 "is_configured": true, 00:22:14.826 "data_offset": 2048, 00:22:14.826 
"data_size": 63488 00:22:14.826 } 00:22:14.826 ] 00:22:14.826 }' 00:22:14.826 12:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:15.084 12:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:15.084 12:21:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:15.084 12:21:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:15.084 12:21:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:16.020 12:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:16.020 12:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:16.020 12:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:16.020 12:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:16.020 12:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:16.020 12:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:16.020 12:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:16.020 12:21:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.020 12:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:16.020 12:21:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:16.020 12:21:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.020 12:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:16.020 "name": 
"raid_bdev1", 00:22:16.020 "uuid": "77b4891a-6486-4810-888c-1f45d89c390f", 00:22:16.020 "strip_size_kb": 64, 00:22:16.020 "state": "online", 00:22:16.020 "raid_level": "raid5f", 00:22:16.020 "superblock": true, 00:22:16.020 "num_base_bdevs": 4, 00:22:16.020 "num_base_bdevs_discovered": 4, 00:22:16.020 "num_base_bdevs_operational": 4, 00:22:16.020 "process": { 00:22:16.020 "type": "rebuild", 00:22:16.020 "target": "spare", 00:22:16.020 "progress": { 00:22:16.020 "blocks": 65280, 00:22:16.020 "percent": 34 00:22:16.020 } 00:22:16.020 }, 00:22:16.020 "base_bdevs_list": [ 00:22:16.020 { 00:22:16.020 "name": "spare", 00:22:16.020 "uuid": "6911bf26-857f-59a1-b605-f1ad818350fe", 00:22:16.020 "is_configured": true, 00:22:16.020 "data_offset": 2048, 00:22:16.020 "data_size": 63488 00:22:16.020 }, 00:22:16.020 { 00:22:16.020 "name": "BaseBdev2", 00:22:16.020 "uuid": "43e6209a-5435-5e17-b507-96ea5793429c", 00:22:16.020 "is_configured": true, 00:22:16.020 "data_offset": 2048, 00:22:16.020 "data_size": 63488 00:22:16.020 }, 00:22:16.020 { 00:22:16.020 "name": "BaseBdev3", 00:22:16.020 "uuid": "1c671a45-e868-53c5-8e23-2a24fc333276", 00:22:16.020 "is_configured": true, 00:22:16.020 "data_offset": 2048, 00:22:16.020 "data_size": 63488 00:22:16.020 }, 00:22:16.020 { 00:22:16.020 "name": "BaseBdev4", 00:22:16.020 "uuid": "1aae62bc-3e99-5514-b80b-e34d7871dcbf", 00:22:16.020 "is_configured": true, 00:22:16.020 "data_offset": 2048, 00:22:16.020 "data_size": 63488 00:22:16.020 } 00:22:16.020 ] 00:22:16.020 }' 00:22:16.020 12:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:16.278 12:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:16.278 12:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:16.278 12:21:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:16.278 12:21:12 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:17.213 12:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:17.213 12:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:17.213 12:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:17.213 12:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:17.213 12:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:17.213 12:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:17.213 12:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.213 12:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.213 12:21:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.213 12:21:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:17.213 12:21:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.213 12:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:17.213 "name": "raid_bdev1", 00:22:17.213 "uuid": "77b4891a-6486-4810-888c-1f45d89c390f", 00:22:17.213 "strip_size_kb": 64, 00:22:17.213 "state": "online", 00:22:17.213 "raid_level": "raid5f", 00:22:17.213 "superblock": true, 00:22:17.213 "num_base_bdevs": 4, 00:22:17.213 "num_base_bdevs_discovered": 4, 00:22:17.213 "num_base_bdevs_operational": 4, 00:22:17.213 "process": { 00:22:17.213 "type": "rebuild", 00:22:17.213 "target": "spare", 00:22:17.213 "progress": { 00:22:17.213 "blocks": 88320, 00:22:17.213 "percent": 46 00:22:17.213 } 00:22:17.213 }, 00:22:17.213 
"base_bdevs_list": [ 00:22:17.213 { 00:22:17.213 "name": "spare", 00:22:17.213 "uuid": "6911bf26-857f-59a1-b605-f1ad818350fe", 00:22:17.213 "is_configured": true, 00:22:17.213 "data_offset": 2048, 00:22:17.213 "data_size": 63488 00:22:17.213 }, 00:22:17.213 { 00:22:17.213 "name": "BaseBdev2", 00:22:17.213 "uuid": "43e6209a-5435-5e17-b507-96ea5793429c", 00:22:17.213 "is_configured": true, 00:22:17.213 "data_offset": 2048, 00:22:17.213 "data_size": 63488 00:22:17.213 }, 00:22:17.213 { 00:22:17.213 "name": "BaseBdev3", 00:22:17.213 "uuid": "1c671a45-e868-53c5-8e23-2a24fc333276", 00:22:17.213 "is_configured": true, 00:22:17.213 "data_offset": 2048, 00:22:17.213 "data_size": 63488 00:22:17.213 }, 00:22:17.213 { 00:22:17.213 "name": "BaseBdev4", 00:22:17.213 "uuid": "1aae62bc-3e99-5514-b80b-e34d7871dcbf", 00:22:17.213 "is_configured": true, 00:22:17.213 "data_offset": 2048, 00:22:17.213 "data_size": 63488 00:22:17.213 } 00:22:17.213 ] 00:22:17.213 }' 00:22:17.213 12:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:17.213 12:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:17.213 12:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:17.472 12:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:17.472 12:21:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:18.405 12:21:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:18.405 12:21:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:18.405 12:21:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:18.405 12:21:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:22:18.405 12:21:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:18.405 12:21:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:18.405 12:21:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:18.405 12:21:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.405 12:21:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:18.405 12:21:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:18.405 12:21:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.405 12:21:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:18.405 "name": "raid_bdev1", 00:22:18.405 "uuid": "77b4891a-6486-4810-888c-1f45d89c390f", 00:22:18.405 "strip_size_kb": 64, 00:22:18.405 "state": "online", 00:22:18.405 "raid_level": "raid5f", 00:22:18.405 "superblock": true, 00:22:18.405 "num_base_bdevs": 4, 00:22:18.405 "num_base_bdevs_discovered": 4, 00:22:18.405 "num_base_bdevs_operational": 4, 00:22:18.405 "process": { 00:22:18.405 "type": "rebuild", 00:22:18.405 "target": "spare", 00:22:18.405 "progress": { 00:22:18.405 "blocks": 109440, 00:22:18.405 "percent": 57 00:22:18.406 } 00:22:18.406 }, 00:22:18.406 "base_bdevs_list": [ 00:22:18.406 { 00:22:18.406 "name": "spare", 00:22:18.406 "uuid": "6911bf26-857f-59a1-b605-f1ad818350fe", 00:22:18.406 "is_configured": true, 00:22:18.406 "data_offset": 2048, 00:22:18.406 "data_size": 63488 00:22:18.406 }, 00:22:18.406 { 00:22:18.406 "name": "BaseBdev2", 00:22:18.406 "uuid": "43e6209a-5435-5e17-b507-96ea5793429c", 00:22:18.406 "is_configured": true, 00:22:18.406 "data_offset": 2048, 00:22:18.406 "data_size": 63488 00:22:18.406 }, 00:22:18.406 { 00:22:18.406 "name": "BaseBdev3", 00:22:18.406 "uuid": 
"1c671a45-e868-53c5-8e23-2a24fc333276", 00:22:18.406 "is_configured": true, 00:22:18.406 "data_offset": 2048, 00:22:18.406 "data_size": 63488 00:22:18.406 }, 00:22:18.406 { 00:22:18.406 "name": "BaseBdev4", 00:22:18.406 "uuid": "1aae62bc-3e99-5514-b80b-e34d7871dcbf", 00:22:18.406 "is_configured": true, 00:22:18.406 "data_offset": 2048, 00:22:18.406 "data_size": 63488 00:22:18.406 } 00:22:18.406 ] 00:22:18.406 }' 00:22:18.406 12:21:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:18.406 12:21:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:18.406 12:21:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:18.406 12:21:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:18.406 12:21:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:19.782 12:21:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:19.782 12:21:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:19.782 12:21:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:19.782 12:21:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:19.782 12:21:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:19.782 12:21:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:19.782 12:21:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:19.782 12:21:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.782 12:21:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:19.782 
12:21:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:19.782 12:21:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.782 12:21:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:19.782 "name": "raid_bdev1", 00:22:19.782 "uuid": "77b4891a-6486-4810-888c-1f45d89c390f", 00:22:19.782 "strip_size_kb": 64, 00:22:19.782 "state": "online", 00:22:19.782 "raid_level": "raid5f", 00:22:19.782 "superblock": true, 00:22:19.782 "num_base_bdevs": 4, 00:22:19.782 "num_base_bdevs_discovered": 4, 00:22:19.782 "num_base_bdevs_operational": 4, 00:22:19.782 "process": { 00:22:19.782 "type": "rebuild", 00:22:19.782 "target": "spare", 00:22:19.782 "progress": { 00:22:19.782 "blocks": 132480, 00:22:19.782 "percent": 69 00:22:19.782 } 00:22:19.782 }, 00:22:19.782 "base_bdevs_list": [ 00:22:19.782 { 00:22:19.782 "name": "spare", 00:22:19.782 "uuid": "6911bf26-857f-59a1-b605-f1ad818350fe", 00:22:19.782 "is_configured": true, 00:22:19.782 "data_offset": 2048, 00:22:19.782 "data_size": 63488 00:22:19.782 }, 00:22:19.782 { 00:22:19.782 "name": "BaseBdev2", 00:22:19.782 "uuid": "43e6209a-5435-5e17-b507-96ea5793429c", 00:22:19.782 "is_configured": true, 00:22:19.782 "data_offset": 2048, 00:22:19.782 "data_size": 63488 00:22:19.782 }, 00:22:19.782 { 00:22:19.782 "name": "BaseBdev3", 00:22:19.782 "uuid": "1c671a45-e868-53c5-8e23-2a24fc333276", 00:22:19.782 "is_configured": true, 00:22:19.782 "data_offset": 2048, 00:22:19.782 "data_size": 63488 00:22:19.782 }, 00:22:19.782 { 00:22:19.782 "name": "BaseBdev4", 00:22:19.782 "uuid": "1aae62bc-3e99-5514-b80b-e34d7871dcbf", 00:22:19.782 "is_configured": true, 00:22:19.782 "data_offset": 2048, 00:22:19.782 "data_size": 63488 00:22:19.782 } 00:22:19.782 ] 00:22:19.782 }' 00:22:19.782 12:21:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:19.782 12:21:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:19.782 12:21:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:19.782 12:21:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:19.782 12:21:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:20.718 12:21:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:20.718 12:21:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:20.718 12:21:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:20.718 12:21:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:20.718 12:21:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:20.718 12:21:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:20.718 12:21:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.718 12:21:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.718 12:21:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.718 12:21:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.718 12:21:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.718 12:21:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:20.718 "name": "raid_bdev1", 00:22:20.718 "uuid": "77b4891a-6486-4810-888c-1f45d89c390f", 00:22:20.718 "strip_size_kb": 64, 00:22:20.718 "state": "online", 00:22:20.718 "raid_level": "raid5f", 00:22:20.718 "superblock": true, 
00:22:20.718 "num_base_bdevs": 4, 00:22:20.718 "num_base_bdevs_discovered": 4, 00:22:20.718 "num_base_bdevs_operational": 4, 00:22:20.718 "process": { 00:22:20.718 "type": "rebuild", 00:22:20.718 "target": "spare", 00:22:20.718 "progress": { 00:22:20.718 "blocks": 153600, 00:22:20.718 "percent": 80 00:22:20.718 } 00:22:20.718 }, 00:22:20.718 "base_bdevs_list": [ 00:22:20.718 { 00:22:20.718 "name": "spare", 00:22:20.718 "uuid": "6911bf26-857f-59a1-b605-f1ad818350fe", 00:22:20.718 "is_configured": true, 00:22:20.718 "data_offset": 2048, 00:22:20.718 "data_size": 63488 00:22:20.718 }, 00:22:20.718 { 00:22:20.718 "name": "BaseBdev2", 00:22:20.718 "uuid": "43e6209a-5435-5e17-b507-96ea5793429c", 00:22:20.718 "is_configured": true, 00:22:20.718 "data_offset": 2048, 00:22:20.718 "data_size": 63488 00:22:20.718 }, 00:22:20.718 { 00:22:20.718 "name": "BaseBdev3", 00:22:20.718 "uuid": "1c671a45-e868-53c5-8e23-2a24fc333276", 00:22:20.718 "is_configured": true, 00:22:20.718 "data_offset": 2048, 00:22:20.718 "data_size": 63488 00:22:20.718 }, 00:22:20.718 { 00:22:20.718 "name": "BaseBdev4", 00:22:20.718 "uuid": "1aae62bc-3e99-5514-b80b-e34d7871dcbf", 00:22:20.718 "is_configured": true, 00:22:20.718 "data_offset": 2048, 00:22:20.718 "data_size": 63488 00:22:20.718 } 00:22:20.718 ] 00:22:20.718 }' 00:22:20.718 12:21:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:20.718 12:21:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:20.718 12:21:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:20.718 12:21:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:20.718 12:21:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:22.096 12:21:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:22.096 12:21:17 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:22.096 12:21:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:22.096 12:21:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:22.096 12:21:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:22.096 12:21:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:22.096 12:21:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:22.096 12:21:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:22.096 12:21:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.096 12:21:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:22.096 12:21:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.096 12:21:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:22.096 "name": "raid_bdev1", 00:22:22.096 "uuid": "77b4891a-6486-4810-888c-1f45d89c390f", 00:22:22.096 "strip_size_kb": 64, 00:22:22.096 "state": "online", 00:22:22.096 "raid_level": "raid5f", 00:22:22.096 "superblock": true, 00:22:22.096 "num_base_bdevs": 4, 00:22:22.096 "num_base_bdevs_discovered": 4, 00:22:22.096 "num_base_bdevs_operational": 4, 00:22:22.096 "process": { 00:22:22.096 "type": "rebuild", 00:22:22.096 "target": "spare", 00:22:22.096 "progress": { 00:22:22.096 "blocks": 174720, 00:22:22.096 "percent": 91 00:22:22.096 } 00:22:22.096 }, 00:22:22.096 "base_bdevs_list": [ 00:22:22.096 { 00:22:22.096 "name": "spare", 00:22:22.096 "uuid": "6911bf26-857f-59a1-b605-f1ad818350fe", 00:22:22.096 "is_configured": true, 00:22:22.096 "data_offset": 2048, 00:22:22.096 
"data_size": 63488 00:22:22.096 }, 00:22:22.096 { 00:22:22.096 "name": "BaseBdev2", 00:22:22.096 "uuid": "43e6209a-5435-5e17-b507-96ea5793429c", 00:22:22.096 "is_configured": true, 00:22:22.096 "data_offset": 2048, 00:22:22.096 "data_size": 63488 00:22:22.096 }, 00:22:22.096 { 00:22:22.096 "name": "BaseBdev3", 00:22:22.096 "uuid": "1c671a45-e868-53c5-8e23-2a24fc333276", 00:22:22.096 "is_configured": true, 00:22:22.096 "data_offset": 2048, 00:22:22.096 "data_size": 63488 00:22:22.096 }, 00:22:22.096 { 00:22:22.096 "name": "BaseBdev4", 00:22:22.096 "uuid": "1aae62bc-3e99-5514-b80b-e34d7871dcbf", 00:22:22.096 "is_configured": true, 00:22:22.096 "data_offset": 2048, 00:22:22.096 "data_size": 63488 00:22:22.096 } 00:22:22.096 ] 00:22:22.096 }' 00:22:22.096 12:21:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:22.096 12:21:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:22.096 12:21:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:22.096 12:21:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:22.096 12:21:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:22.662 [2024-11-25 12:21:18.634705] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:22.662 [2024-11-25 12:21:18.634995] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:22.662 [2024-11-25 12:21:18.635197] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:22.921 12:21:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:22.921 12:21:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:22.921 12:21:18 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:22.921 12:21:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:22.921 12:21:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:22.921 12:21:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:22.921 12:21:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:22.921 12:21:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:22.921 12:21:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.921 12:21:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:22.921 12:21:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.194 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:23.194 "name": "raid_bdev1", 00:22:23.194 "uuid": "77b4891a-6486-4810-888c-1f45d89c390f", 00:22:23.194 "strip_size_kb": 64, 00:22:23.194 "state": "online", 00:22:23.194 "raid_level": "raid5f", 00:22:23.194 "superblock": true, 00:22:23.194 "num_base_bdevs": 4, 00:22:23.194 "num_base_bdevs_discovered": 4, 00:22:23.194 "num_base_bdevs_operational": 4, 00:22:23.194 "base_bdevs_list": [ 00:22:23.194 { 00:22:23.194 "name": "spare", 00:22:23.194 "uuid": "6911bf26-857f-59a1-b605-f1ad818350fe", 00:22:23.194 "is_configured": true, 00:22:23.194 "data_offset": 2048, 00:22:23.194 "data_size": 63488 00:22:23.194 }, 00:22:23.194 { 00:22:23.194 "name": "BaseBdev2", 00:22:23.194 "uuid": "43e6209a-5435-5e17-b507-96ea5793429c", 00:22:23.194 "is_configured": true, 00:22:23.194 "data_offset": 2048, 00:22:23.194 "data_size": 63488 00:22:23.194 }, 00:22:23.194 { 00:22:23.194 "name": "BaseBdev3", 00:22:23.194 "uuid": "1c671a45-e868-53c5-8e23-2a24fc333276", 
00:22:23.194 "is_configured": true, 00:22:23.194 "data_offset": 2048, 00:22:23.194 "data_size": 63488 00:22:23.194 }, 00:22:23.194 { 00:22:23.194 "name": "BaseBdev4", 00:22:23.194 "uuid": "1aae62bc-3e99-5514-b80b-e34d7871dcbf", 00:22:23.194 "is_configured": true, 00:22:23.194 "data_offset": 2048, 00:22:23.194 "data_size": 63488 00:22:23.194 } 00:22:23.194 ] 00:22:23.194 }' 00:22:23.194 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:23.194 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:23.194 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:23.194 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:22:23.194 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:22:23.194 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:23.194 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:23.194 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:23.194 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:23.194 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:23.194 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:23.194 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.194 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.194 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.194 12:21:19 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.194 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:23.194 "name": "raid_bdev1", 00:22:23.194 "uuid": "77b4891a-6486-4810-888c-1f45d89c390f", 00:22:23.194 "strip_size_kb": 64, 00:22:23.194 "state": "online", 00:22:23.194 "raid_level": "raid5f", 00:22:23.194 "superblock": true, 00:22:23.195 "num_base_bdevs": 4, 00:22:23.195 "num_base_bdevs_discovered": 4, 00:22:23.195 "num_base_bdevs_operational": 4, 00:22:23.195 "base_bdevs_list": [ 00:22:23.195 { 00:22:23.195 "name": "spare", 00:22:23.195 "uuid": "6911bf26-857f-59a1-b605-f1ad818350fe", 00:22:23.195 "is_configured": true, 00:22:23.195 "data_offset": 2048, 00:22:23.195 "data_size": 63488 00:22:23.195 }, 00:22:23.195 { 00:22:23.195 "name": "BaseBdev2", 00:22:23.195 "uuid": "43e6209a-5435-5e17-b507-96ea5793429c", 00:22:23.195 "is_configured": true, 00:22:23.195 "data_offset": 2048, 00:22:23.195 "data_size": 63488 00:22:23.195 }, 00:22:23.195 { 00:22:23.195 "name": "BaseBdev3", 00:22:23.195 "uuid": "1c671a45-e868-53c5-8e23-2a24fc333276", 00:22:23.195 "is_configured": true, 00:22:23.195 "data_offset": 2048, 00:22:23.195 "data_size": 63488 00:22:23.195 }, 00:22:23.195 { 00:22:23.195 "name": "BaseBdev4", 00:22:23.195 "uuid": "1aae62bc-3e99-5514-b80b-e34d7871dcbf", 00:22:23.195 "is_configured": true, 00:22:23.195 "data_offset": 2048, 00:22:23.195 "data_size": 63488 00:22:23.195 } 00:22:23.195 ] 00:22:23.195 }' 00:22:23.195 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:23.195 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:23.195 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:23.195 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:23.195 12:21:19 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:22:23.195 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:23.195 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:23.195 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:23.195 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:23.195 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:23.195 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:23.195 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:23.195 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:23.195 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:23.195 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:23.195 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.195 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.195 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.195 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.454 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:23.454 "name": "raid_bdev1", 00:22:23.454 "uuid": "77b4891a-6486-4810-888c-1f45d89c390f", 00:22:23.454 "strip_size_kb": 64, 00:22:23.454 "state": "online", 00:22:23.454 "raid_level": "raid5f", 00:22:23.454 "superblock": true, 
00:22:23.454 "num_base_bdevs": 4, 00:22:23.454 "num_base_bdevs_discovered": 4, 00:22:23.454 "num_base_bdevs_operational": 4, 00:22:23.454 "base_bdevs_list": [ 00:22:23.454 { 00:22:23.454 "name": "spare", 00:22:23.454 "uuid": "6911bf26-857f-59a1-b605-f1ad818350fe", 00:22:23.454 "is_configured": true, 00:22:23.454 "data_offset": 2048, 00:22:23.454 "data_size": 63488 00:22:23.454 }, 00:22:23.454 { 00:22:23.454 "name": "BaseBdev2", 00:22:23.454 "uuid": "43e6209a-5435-5e17-b507-96ea5793429c", 00:22:23.454 "is_configured": true, 00:22:23.454 "data_offset": 2048, 00:22:23.454 "data_size": 63488 00:22:23.454 }, 00:22:23.454 { 00:22:23.454 "name": "BaseBdev3", 00:22:23.454 "uuid": "1c671a45-e868-53c5-8e23-2a24fc333276", 00:22:23.454 "is_configured": true, 00:22:23.454 "data_offset": 2048, 00:22:23.454 "data_size": 63488 00:22:23.454 }, 00:22:23.454 { 00:22:23.454 "name": "BaseBdev4", 00:22:23.454 "uuid": "1aae62bc-3e99-5514-b80b-e34d7871dcbf", 00:22:23.454 "is_configured": true, 00:22:23.454 "data_offset": 2048, 00:22:23.454 "data_size": 63488 00:22:23.454 } 00:22:23.454 ] 00:22:23.454 }' 00:22:23.454 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:23.454 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.712 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:23.712 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.712 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.712 [2024-11-25 12:21:19.766609] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:23.712 [2024-11-25 12:21:19.766661] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:23.712 [2024-11-25 12:21:19.766847] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:23.712 
[2024-11-25 12:21:19.767019] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:23.712 [2024-11-25 12:21:19.767050] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:23.712 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.712 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:23.712 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.712 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.713 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:22:23.713 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.971 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:22:23.971 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:22:23.971 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:22:23.971 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:23.971 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:23.971 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:23.971 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:23.971 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:23.971 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:23.971 12:21:19 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:22:23.971 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:23.971 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:23.971 12:21:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:24.230 /dev/nbd0 00:22:24.230 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:24.230 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:24.230 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:24.230 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:22:24.230 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:24.230 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:24.231 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:24.231 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:22:24.231 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:24.231 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:24.231 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:24.231 1+0 records in 00:22:24.231 1+0 records out 00:22:24.231 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035911 s, 11.4 MB/s 00:22:24.231 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:24.231 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:22:24.231 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:24.231 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:24.231 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:22:24.231 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:24.231 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:24.231 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:22:24.491 /dev/nbd1 00:22:24.491 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:24.491 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:24.491 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:22:24.491 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:22:24.491 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:24.491 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:24.491 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:22:24.491 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:22:24.491 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:24.491 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:24.491 12:21:20 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:24.491 1+0 records in 00:22:24.491 1+0 records out 00:22:24.491 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321119 s, 12.8 MB/s 00:22:24.491 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:24.491 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:22:24.491 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:24.491 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:24.491 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:22:24.491 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:24.491 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:24.491 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:24.750 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:22:24.750 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:24.750 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:24.750 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:24.750 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:22:24.750 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:24.750 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:25.010 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:25.010 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:25.010 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:25.010 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:25.010 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:25.010 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:25.010 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:22:25.010 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:22:25.010 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:25.010 12:21:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:22:25.269 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:25.269 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:25.269 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:25.269 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:25.269 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:25.269 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:25.269 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:22:25.269 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # 
return 0 00:22:25.269 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:22:25.269 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:22:25.269 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.269 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:25.269 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.269 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:25.269 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.269 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:25.269 [2024-11-25 12:21:21.301620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:25.269 [2024-11-25 12:21:21.301698] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:25.269 [2024-11-25 12:21:21.301737] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:22:25.269 [2024-11-25 12:21:21.301753] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:25.269 [2024-11-25 12:21:21.304847] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:25.269 [2024-11-25 12:21:21.304895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:25.269 [2024-11-25 12:21:21.305041] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:25.269 [2024-11-25 12:21:21.305127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:25.269 [2024-11-25 12:21:21.305370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:22:25.269 [2024-11-25 12:21:21.305522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:25.269 [2024-11-25 12:21:21.305655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:25.269 spare 00:22:25.269 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.269 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:22:25.269 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.269 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:25.528 [2024-11-25 12:21:21.405872] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:22:25.528 [2024-11-25 12:21:21.405927] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:25.528 [2024-11-25 12:21:21.406417] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:22:25.528 [2024-11-25 12:21:21.412819] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:22:25.528 [2024-11-25 12:21:21.412961] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:22:25.528 [2024-11-25 12:21:21.413281] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:25.528 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.528 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:22:25.528 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:25.528 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:25.528 12:21:21 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:25.528 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:25.528 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:25.528 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:25.528 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:25.528 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:25.528 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:25.528 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:25.528 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.529 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.529 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:25.529 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.529 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:25.529 "name": "raid_bdev1", 00:22:25.529 "uuid": "77b4891a-6486-4810-888c-1f45d89c390f", 00:22:25.529 "strip_size_kb": 64, 00:22:25.529 "state": "online", 00:22:25.529 "raid_level": "raid5f", 00:22:25.529 "superblock": true, 00:22:25.529 "num_base_bdevs": 4, 00:22:25.529 "num_base_bdevs_discovered": 4, 00:22:25.529 "num_base_bdevs_operational": 4, 00:22:25.529 "base_bdevs_list": [ 00:22:25.529 { 00:22:25.529 "name": "spare", 00:22:25.529 "uuid": "6911bf26-857f-59a1-b605-f1ad818350fe", 00:22:25.529 "is_configured": true, 00:22:25.529 "data_offset": 2048, 00:22:25.529 "data_size": 63488 
00:22:25.529 }, 00:22:25.529 { 00:22:25.529 "name": "BaseBdev2", 00:22:25.529 "uuid": "43e6209a-5435-5e17-b507-96ea5793429c", 00:22:25.529 "is_configured": true, 00:22:25.529 "data_offset": 2048, 00:22:25.529 "data_size": 63488 00:22:25.529 }, 00:22:25.529 { 00:22:25.529 "name": "BaseBdev3", 00:22:25.529 "uuid": "1c671a45-e868-53c5-8e23-2a24fc333276", 00:22:25.529 "is_configured": true, 00:22:25.529 "data_offset": 2048, 00:22:25.529 "data_size": 63488 00:22:25.529 }, 00:22:25.529 { 00:22:25.529 "name": "BaseBdev4", 00:22:25.529 "uuid": "1aae62bc-3e99-5514-b80b-e34d7871dcbf", 00:22:25.529 "is_configured": true, 00:22:25.529 "data_offset": 2048, 00:22:25.529 "data_size": 63488 00:22:25.529 } 00:22:25.529 ] 00:22:25.529 }' 00:22:25.529 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:25.529 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.096 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:26.096 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:26.096 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:26.096 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:26.096 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:26.096 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.096 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:26.096 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.096 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.096 12:21:21 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.096 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:26.096 "name": "raid_bdev1", 00:22:26.096 "uuid": "77b4891a-6486-4810-888c-1f45d89c390f", 00:22:26.096 "strip_size_kb": 64, 00:22:26.096 "state": "online", 00:22:26.096 "raid_level": "raid5f", 00:22:26.096 "superblock": true, 00:22:26.096 "num_base_bdevs": 4, 00:22:26.096 "num_base_bdevs_discovered": 4, 00:22:26.096 "num_base_bdevs_operational": 4, 00:22:26.096 "base_bdevs_list": [ 00:22:26.096 { 00:22:26.096 "name": "spare", 00:22:26.096 "uuid": "6911bf26-857f-59a1-b605-f1ad818350fe", 00:22:26.096 "is_configured": true, 00:22:26.096 "data_offset": 2048, 00:22:26.096 "data_size": 63488 00:22:26.096 }, 00:22:26.096 { 00:22:26.096 "name": "BaseBdev2", 00:22:26.096 "uuid": "43e6209a-5435-5e17-b507-96ea5793429c", 00:22:26.096 "is_configured": true, 00:22:26.096 "data_offset": 2048, 00:22:26.096 "data_size": 63488 00:22:26.096 }, 00:22:26.096 { 00:22:26.096 "name": "BaseBdev3", 00:22:26.096 "uuid": "1c671a45-e868-53c5-8e23-2a24fc333276", 00:22:26.096 "is_configured": true, 00:22:26.096 "data_offset": 2048, 00:22:26.096 "data_size": 63488 00:22:26.096 }, 00:22:26.096 { 00:22:26.096 "name": "BaseBdev4", 00:22:26.096 "uuid": "1aae62bc-3e99-5514-b80b-e34d7871dcbf", 00:22:26.096 "is_configured": true, 00:22:26.096 "data_offset": 2048, 00:22:26.096 "data_size": 63488 00:22:26.096 } 00:22:26.096 ] 00:22:26.096 }' 00:22:26.096 12:21:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:26.096 12:21:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:26.096 12:21:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:26.096 12:21:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:26.096 12:21:22 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:26.096 12:21:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.096 12:21:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.096 12:21:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:26.096 12:21:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.096 12:21:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:22:26.096 12:21:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:26.096 12:21:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.096 12:21:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.096 [2024-11-25 12:21:22.108851] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:26.096 12:21:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.096 12:21:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:26.096 12:21:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:26.096 12:21:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:26.096 12:21:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:26.096 12:21:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:26.096 12:21:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:26.096 12:21:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:22:26.096 12:21:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:26.096 12:21:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:26.096 12:21:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:26.096 12:21:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:26.096 12:21:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.096 12:21:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.096 12:21:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.096 12:21:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.096 12:21:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:26.096 "name": "raid_bdev1", 00:22:26.096 "uuid": "77b4891a-6486-4810-888c-1f45d89c390f", 00:22:26.096 "strip_size_kb": 64, 00:22:26.096 "state": "online", 00:22:26.096 "raid_level": "raid5f", 00:22:26.096 "superblock": true, 00:22:26.096 "num_base_bdevs": 4, 00:22:26.096 "num_base_bdevs_discovered": 3, 00:22:26.097 "num_base_bdevs_operational": 3, 00:22:26.097 "base_bdevs_list": [ 00:22:26.097 { 00:22:26.097 "name": null, 00:22:26.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.097 "is_configured": false, 00:22:26.097 "data_offset": 0, 00:22:26.097 "data_size": 63488 00:22:26.097 }, 00:22:26.097 { 00:22:26.097 "name": "BaseBdev2", 00:22:26.097 "uuid": "43e6209a-5435-5e17-b507-96ea5793429c", 00:22:26.097 "is_configured": true, 00:22:26.097 "data_offset": 2048, 00:22:26.097 "data_size": 63488 00:22:26.097 }, 00:22:26.097 { 00:22:26.097 "name": "BaseBdev3", 00:22:26.097 "uuid": "1c671a45-e868-53c5-8e23-2a24fc333276", 00:22:26.097 "is_configured": true, 00:22:26.097 "data_offset": 2048, 
00:22:26.097 "data_size": 63488 00:22:26.097 }, 00:22:26.097 { 00:22:26.097 "name": "BaseBdev4", 00:22:26.097 "uuid": "1aae62bc-3e99-5514-b80b-e34d7871dcbf", 00:22:26.097 "is_configured": true, 00:22:26.097 "data_offset": 2048, 00:22:26.097 "data_size": 63488 00:22:26.097 } 00:22:26.097 ] 00:22:26.097 }' 00:22:26.097 12:21:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:26.097 12:21:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.663 12:21:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:26.663 12:21:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.663 12:21:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.663 [2024-11-25 12:21:22.605016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:26.663 [2024-11-25 12:21:22.605266] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:26.663 [2024-11-25 12:21:22.605297] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:26.663 [2024-11-25 12:21:22.605375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:26.663 [2024-11-25 12:21:22.618663] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:22:26.663 12:21:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.663 12:21:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:22:26.663 [2024-11-25 12:21:22.627382] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:27.598 12:21:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:27.598 12:21:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:27.598 12:21:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:27.598 12:21:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:27.598 12:21:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:27.598 12:21:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:27.598 12:21:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:27.598 12:21:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.598 12:21:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:27.598 12:21:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.598 12:21:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:27.598 "name": "raid_bdev1", 00:22:27.598 "uuid": "77b4891a-6486-4810-888c-1f45d89c390f", 00:22:27.598 "strip_size_kb": 64, 00:22:27.598 "state": "online", 00:22:27.598 
"raid_level": "raid5f", 00:22:27.598 "superblock": true, 00:22:27.598 "num_base_bdevs": 4, 00:22:27.598 "num_base_bdevs_discovered": 4, 00:22:27.598 "num_base_bdevs_operational": 4, 00:22:27.598 "process": { 00:22:27.598 "type": "rebuild", 00:22:27.598 "target": "spare", 00:22:27.598 "progress": { 00:22:27.598 "blocks": 17280, 00:22:27.598 "percent": 9 00:22:27.598 } 00:22:27.598 }, 00:22:27.598 "base_bdevs_list": [ 00:22:27.598 { 00:22:27.598 "name": "spare", 00:22:27.598 "uuid": "6911bf26-857f-59a1-b605-f1ad818350fe", 00:22:27.598 "is_configured": true, 00:22:27.598 "data_offset": 2048, 00:22:27.598 "data_size": 63488 00:22:27.598 }, 00:22:27.598 { 00:22:27.598 "name": "BaseBdev2", 00:22:27.598 "uuid": "43e6209a-5435-5e17-b507-96ea5793429c", 00:22:27.598 "is_configured": true, 00:22:27.598 "data_offset": 2048, 00:22:27.598 "data_size": 63488 00:22:27.598 }, 00:22:27.598 { 00:22:27.598 "name": "BaseBdev3", 00:22:27.598 "uuid": "1c671a45-e868-53c5-8e23-2a24fc333276", 00:22:27.598 "is_configured": true, 00:22:27.598 "data_offset": 2048, 00:22:27.598 "data_size": 63488 00:22:27.598 }, 00:22:27.598 { 00:22:27.598 "name": "BaseBdev4", 00:22:27.598 "uuid": "1aae62bc-3e99-5514-b80b-e34d7871dcbf", 00:22:27.598 "is_configured": true, 00:22:27.598 "data_offset": 2048, 00:22:27.598 "data_size": 63488 00:22:27.598 } 00:22:27.598 ] 00:22:27.598 }' 00:22:27.598 12:21:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:27.857 12:21:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:27.857 12:21:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:27.857 12:21:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:27.857 12:21:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:22:27.857 12:21:23 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.857 12:21:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:27.857 [2024-11-25 12:21:23.793278] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:27.857 [2024-11-25 12:21:23.839054] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:27.857 [2024-11-25 12:21:23.839148] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:27.857 [2024-11-25 12:21:23.839175] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:27.857 [2024-11-25 12:21:23.839190] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:27.857 12:21:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.857 12:21:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:27.857 12:21:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:27.857 12:21:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:27.857 12:21:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:27.857 12:21:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:27.857 12:21:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:27.857 12:21:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:27.857 12:21:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:27.857 12:21:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:27.857 12:21:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:22:27.857 12:21:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:27.857 12:21:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:27.857 12:21:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.857 12:21:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:27.857 12:21:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.857 12:21:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:27.857 "name": "raid_bdev1", 00:22:27.857 "uuid": "77b4891a-6486-4810-888c-1f45d89c390f", 00:22:27.857 "strip_size_kb": 64, 00:22:27.857 "state": "online", 00:22:27.857 "raid_level": "raid5f", 00:22:27.857 "superblock": true, 00:22:27.857 "num_base_bdevs": 4, 00:22:27.857 "num_base_bdevs_discovered": 3, 00:22:27.857 "num_base_bdevs_operational": 3, 00:22:27.857 "base_bdevs_list": [ 00:22:27.857 { 00:22:27.857 "name": null, 00:22:27.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.857 "is_configured": false, 00:22:27.857 "data_offset": 0, 00:22:27.857 "data_size": 63488 00:22:27.857 }, 00:22:27.857 { 00:22:27.857 "name": "BaseBdev2", 00:22:27.857 "uuid": "43e6209a-5435-5e17-b507-96ea5793429c", 00:22:27.857 "is_configured": true, 00:22:27.857 "data_offset": 2048, 00:22:27.857 "data_size": 63488 00:22:27.857 }, 00:22:27.857 { 00:22:27.857 "name": "BaseBdev3", 00:22:27.857 "uuid": "1c671a45-e868-53c5-8e23-2a24fc333276", 00:22:27.857 "is_configured": true, 00:22:27.857 "data_offset": 2048, 00:22:27.857 "data_size": 63488 00:22:27.857 }, 00:22:27.857 { 00:22:27.857 "name": "BaseBdev4", 00:22:27.857 "uuid": "1aae62bc-3e99-5514-b80b-e34d7871dcbf", 00:22:27.857 "is_configured": true, 00:22:27.857 "data_offset": 2048, 00:22:27.857 "data_size": 63488 00:22:27.857 } 00:22:27.857 ] 00:22:27.857 }' 
00:22:27.857 12:21:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:27.857 12:21:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:28.424 12:21:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:28.424 12:21:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.424 12:21:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:28.424 [2024-11-25 12:21:24.382399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:28.424 [2024-11-25 12:21:24.382492] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:28.424 [2024-11-25 12:21:24.382544] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:22:28.424 [2024-11-25 12:21:24.382564] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:28.424 [2024-11-25 12:21:24.383191] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:28.424 [2024-11-25 12:21:24.383223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:28.424 [2024-11-25 12:21:24.383362] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:28.424 [2024-11-25 12:21:24.383400] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:28.424 [2024-11-25 12:21:24.383414] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:28.424 [2024-11-25 12:21:24.383466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:28.424 [2024-11-25 12:21:24.397065] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:22:28.424 spare 00:22:28.424 12:21:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.424 12:21:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:22:28.424 [2024-11-25 12:21:24.405973] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:29.359 12:21:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:29.359 12:21:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:29.359 12:21:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:29.359 12:21:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:29.359 12:21:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:29.359 12:21:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:29.359 12:21:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:29.359 12:21:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.359 12:21:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:29.359 12:21:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.618 12:21:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:29.618 "name": "raid_bdev1", 00:22:29.618 "uuid": "77b4891a-6486-4810-888c-1f45d89c390f", 00:22:29.618 "strip_size_kb": 64, 00:22:29.618 "state": 
"online", 00:22:29.618 "raid_level": "raid5f", 00:22:29.618 "superblock": true, 00:22:29.618 "num_base_bdevs": 4, 00:22:29.618 "num_base_bdevs_discovered": 4, 00:22:29.618 "num_base_bdevs_operational": 4, 00:22:29.618 "process": { 00:22:29.618 "type": "rebuild", 00:22:29.618 "target": "spare", 00:22:29.618 "progress": { 00:22:29.618 "blocks": 17280, 00:22:29.618 "percent": 9 00:22:29.618 } 00:22:29.618 }, 00:22:29.618 "base_bdevs_list": [ 00:22:29.618 { 00:22:29.618 "name": "spare", 00:22:29.618 "uuid": "6911bf26-857f-59a1-b605-f1ad818350fe", 00:22:29.618 "is_configured": true, 00:22:29.618 "data_offset": 2048, 00:22:29.618 "data_size": 63488 00:22:29.618 }, 00:22:29.618 { 00:22:29.618 "name": "BaseBdev2", 00:22:29.618 "uuid": "43e6209a-5435-5e17-b507-96ea5793429c", 00:22:29.618 "is_configured": true, 00:22:29.618 "data_offset": 2048, 00:22:29.618 "data_size": 63488 00:22:29.618 }, 00:22:29.618 { 00:22:29.618 "name": "BaseBdev3", 00:22:29.618 "uuid": "1c671a45-e868-53c5-8e23-2a24fc333276", 00:22:29.618 "is_configured": true, 00:22:29.618 "data_offset": 2048, 00:22:29.618 "data_size": 63488 00:22:29.618 }, 00:22:29.618 { 00:22:29.618 "name": "BaseBdev4", 00:22:29.618 "uuid": "1aae62bc-3e99-5514-b80b-e34d7871dcbf", 00:22:29.618 "is_configured": true, 00:22:29.618 "data_offset": 2048, 00:22:29.618 "data_size": 63488 00:22:29.618 } 00:22:29.618 ] 00:22:29.618 }' 00:22:29.618 12:21:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:29.618 12:21:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:29.618 12:21:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:29.618 12:21:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:29.618 12:21:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:22:29.618 12:21:25 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.618 12:21:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:29.618 [2024-11-25 12:21:25.571525] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:29.618 [2024-11-25 12:21:25.618667] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:29.618 [2024-11-25 12:21:25.619038] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:29.618 [2024-11-25 12:21:25.619077] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:29.618 [2024-11-25 12:21:25.619091] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:29.618 12:21:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.618 12:21:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:29.618 12:21:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:29.618 12:21:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:29.618 12:21:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:29.618 12:21:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:29.618 12:21:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:29.618 12:21:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:29.619 12:21:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:29.619 12:21:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:29.619 12:21:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:29.619 12:21:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:29.619 12:21:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:29.619 12:21:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.619 12:21:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:29.619 12:21:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.619 12:21:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:29.619 "name": "raid_bdev1", 00:22:29.619 "uuid": "77b4891a-6486-4810-888c-1f45d89c390f", 00:22:29.619 "strip_size_kb": 64, 00:22:29.619 "state": "online", 00:22:29.619 "raid_level": "raid5f", 00:22:29.619 "superblock": true, 00:22:29.619 "num_base_bdevs": 4, 00:22:29.619 "num_base_bdevs_discovered": 3, 00:22:29.619 "num_base_bdevs_operational": 3, 00:22:29.619 "base_bdevs_list": [ 00:22:29.619 { 00:22:29.619 "name": null, 00:22:29.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:29.619 "is_configured": false, 00:22:29.619 "data_offset": 0, 00:22:29.619 "data_size": 63488 00:22:29.619 }, 00:22:29.619 { 00:22:29.619 "name": "BaseBdev2", 00:22:29.619 "uuid": "43e6209a-5435-5e17-b507-96ea5793429c", 00:22:29.619 "is_configured": true, 00:22:29.619 "data_offset": 2048, 00:22:29.619 "data_size": 63488 00:22:29.619 }, 00:22:29.619 { 00:22:29.619 "name": "BaseBdev3", 00:22:29.619 "uuid": "1c671a45-e868-53c5-8e23-2a24fc333276", 00:22:29.619 "is_configured": true, 00:22:29.619 "data_offset": 2048, 00:22:29.619 "data_size": 63488 00:22:29.619 }, 00:22:29.619 { 00:22:29.619 "name": "BaseBdev4", 00:22:29.619 "uuid": "1aae62bc-3e99-5514-b80b-e34d7871dcbf", 00:22:29.619 "is_configured": true, 00:22:29.619 "data_offset": 2048, 00:22:29.619 
"data_size": 63488 00:22:29.619 } 00:22:29.619 ] 00:22:29.619 }' 00:22:29.619 12:21:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:29.619 12:21:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:30.185 12:21:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:30.185 12:21:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:30.185 12:21:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:30.185 12:21:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:30.185 12:21:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:30.185 12:21:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:30.185 12:21:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.185 12:21:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.185 12:21:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:30.185 12:21:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.185 12:21:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:30.185 "name": "raid_bdev1", 00:22:30.185 "uuid": "77b4891a-6486-4810-888c-1f45d89c390f", 00:22:30.185 "strip_size_kb": 64, 00:22:30.185 "state": "online", 00:22:30.185 "raid_level": "raid5f", 00:22:30.185 "superblock": true, 00:22:30.185 "num_base_bdevs": 4, 00:22:30.185 "num_base_bdevs_discovered": 3, 00:22:30.185 "num_base_bdevs_operational": 3, 00:22:30.185 "base_bdevs_list": [ 00:22:30.185 { 00:22:30.185 "name": null, 00:22:30.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:30.185 
"is_configured": false, 00:22:30.186 "data_offset": 0, 00:22:30.186 "data_size": 63488 00:22:30.186 }, 00:22:30.186 { 00:22:30.186 "name": "BaseBdev2", 00:22:30.186 "uuid": "43e6209a-5435-5e17-b507-96ea5793429c", 00:22:30.186 "is_configured": true, 00:22:30.186 "data_offset": 2048, 00:22:30.186 "data_size": 63488 00:22:30.186 }, 00:22:30.186 { 00:22:30.186 "name": "BaseBdev3", 00:22:30.186 "uuid": "1c671a45-e868-53c5-8e23-2a24fc333276", 00:22:30.186 "is_configured": true, 00:22:30.186 "data_offset": 2048, 00:22:30.186 "data_size": 63488 00:22:30.186 }, 00:22:30.186 { 00:22:30.186 "name": "BaseBdev4", 00:22:30.186 "uuid": "1aae62bc-3e99-5514-b80b-e34d7871dcbf", 00:22:30.186 "is_configured": true, 00:22:30.186 "data_offset": 2048, 00:22:30.186 "data_size": 63488 00:22:30.186 } 00:22:30.186 ] 00:22:30.186 }' 00:22:30.186 12:21:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:30.186 12:21:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:30.186 12:21:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:30.444 12:21:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:30.444 12:21:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:22:30.444 12:21:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.444 12:21:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:30.444 12:21:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.444 12:21:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:30.444 12:21:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.444 12:21:26 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:30.444 [2024-11-25 12:21:26.325950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:30.444 [2024-11-25 12:21:26.326070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:30.444 [2024-11-25 12:21:26.326115] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:22:30.444 [2024-11-25 12:21:26.326134] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:30.444 [2024-11-25 12:21:26.326966] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:30.444 [2024-11-25 12:21:26.327021] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:30.444 [2024-11-25 12:21:26.327159] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:30.444 [2024-11-25 12:21:26.327187] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:30.444 [2024-11-25 12:21:26.327210] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:30.444 [2024-11-25 12:21:26.327228] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:22:30.444 BaseBdev1 00:22:30.444 12:21:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.444 12:21:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:22:31.381 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:31.381 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:31.381 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:22:31.381 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:31.381 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:31.381 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:31.381 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:31.381 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:31.381 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:31.381 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:31.381 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:31.381 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:31.381 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.381 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:31.381 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.381 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:31.381 "name": "raid_bdev1", 00:22:31.381 "uuid": "77b4891a-6486-4810-888c-1f45d89c390f", 00:22:31.381 "strip_size_kb": 64, 00:22:31.381 "state": "online", 00:22:31.381 "raid_level": "raid5f", 00:22:31.381 "superblock": true, 00:22:31.381 "num_base_bdevs": 4, 00:22:31.381 "num_base_bdevs_discovered": 3, 00:22:31.381 "num_base_bdevs_operational": 3, 00:22:31.381 "base_bdevs_list": [ 00:22:31.381 { 00:22:31.381 "name": null, 00:22:31.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:31.381 "is_configured": false, 00:22:31.381 
"data_offset": 0, 00:22:31.381 "data_size": 63488 00:22:31.381 }, 00:22:31.381 { 00:22:31.381 "name": "BaseBdev2", 00:22:31.381 "uuid": "43e6209a-5435-5e17-b507-96ea5793429c", 00:22:31.381 "is_configured": true, 00:22:31.381 "data_offset": 2048, 00:22:31.381 "data_size": 63488 00:22:31.381 }, 00:22:31.381 { 00:22:31.381 "name": "BaseBdev3", 00:22:31.381 "uuid": "1c671a45-e868-53c5-8e23-2a24fc333276", 00:22:31.381 "is_configured": true, 00:22:31.381 "data_offset": 2048, 00:22:31.381 "data_size": 63488 00:22:31.381 }, 00:22:31.381 { 00:22:31.381 "name": "BaseBdev4", 00:22:31.381 "uuid": "1aae62bc-3e99-5514-b80b-e34d7871dcbf", 00:22:31.381 "is_configured": true, 00:22:31.381 "data_offset": 2048, 00:22:31.381 "data_size": 63488 00:22:31.381 } 00:22:31.381 ] 00:22:31.381 }' 00:22:31.381 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:31.381 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:31.949 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:31.949 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:31.949 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:31.949 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:31.949 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:31.949 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:31.949 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:31.949 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.949 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:22:31.949 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.949 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:31.949 "name": "raid_bdev1", 00:22:31.949 "uuid": "77b4891a-6486-4810-888c-1f45d89c390f", 00:22:31.949 "strip_size_kb": 64, 00:22:31.949 "state": "online", 00:22:31.949 "raid_level": "raid5f", 00:22:31.949 "superblock": true, 00:22:31.949 "num_base_bdevs": 4, 00:22:31.949 "num_base_bdevs_discovered": 3, 00:22:31.949 "num_base_bdevs_operational": 3, 00:22:31.949 "base_bdevs_list": [ 00:22:31.949 { 00:22:31.949 "name": null, 00:22:31.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:31.949 "is_configured": false, 00:22:31.949 "data_offset": 0, 00:22:31.949 "data_size": 63488 00:22:31.949 }, 00:22:31.949 { 00:22:31.949 "name": "BaseBdev2", 00:22:31.949 "uuid": "43e6209a-5435-5e17-b507-96ea5793429c", 00:22:31.949 "is_configured": true, 00:22:31.949 "data_offset": 2048, 00:22:31.949 "data_size": 63488 00:22:31.949 }, 00:22:31.949 { 00:22:31.949 "name": "BaseBdev3", 00:22:31.949 "uuid": "1c671a45-e868-53c5-8e23-2a24fc333276", 00:22:31.949 "is_configured": true, 00:22:31.949 "data_offset": 2048, 00:22:31.949 "data_size": 63488 00:22:31.949 }, 00:22:31.949 { 00:22:31.949 "name": "BaseBdev4", 00:22:31.949 "uuid": "1aae62bc-3e99-5514-b80b-e34d7871dcbf", 00:22:31.949 "is_configured": true, 00:22:31.949 "data_offset": 2048, 00:22:31.949 "data_size": 63488 00:22:31.949 } 00:22:31.949 ] 00:22:31.949 }' 00:22:31.949 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:31.949 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:31.949 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:31.949 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:31.949 
12:21:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:31.949 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:22:31.949 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:31.949 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:31.949 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:31.949 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:31.949 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:31.949 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:31.949 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.949 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:31.949 [2024-11-25 12:21:27.962512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:31.949 [2024-11-25 12:21:27.962853] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:31.949 [2024-11-25 12:21:27.962890] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:31.949 request: 00:22:31.949 { 00:22:31.949 "base_bdev": "BaseBdev1", 00:22:31.949 "raid_bdev": "raid_bdev1", 00:22:31.949 "method": "bdev_raid_add_base_bdev", 00:22:31.949 "req_id": 1 00:22:31.949 } 00:22:31.949 Got JSON-RPC error response 00:22:31.949 response: 00:22:31.949 { 00:22:31.949 "code": -22, 00:22:31.949 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:22:31.949 } 00:22:31.949 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:31.949 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:22:31.949 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:31.949 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:31.949 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:31.949 12:21:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:22:33.323 12:21:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:33.323 12:21:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:33.323 12:21:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:33.323 12:21:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:33.323 12:21:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:33.323 12:21:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:33.323 12:21:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:33.323 12:21:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:33.323 12:21:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:33.323 12:21:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:33.323 12:21:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:33.323 12:21:28 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:33.323 12:21:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.323 12:21:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.323 12:21:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.323 12:21:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:33.323 "name": "raid_bdev1", 00:22:33.323 "uuid": "77b4891a-6486-4810-888c-1f45d89c390f", 00:22:33.323 "strip_size_kb": 64, 00:22:33.323 "state": "online", 00:22:33.323 "raid_level": "raid5f", 00:22:33.323 "superblock": true, 00:22:33.323 "num_base_bdevs": 4, 00:22:33.323 "num_base_bdevs_discovered": 3, 00:22:33.323 "num_base_bdevs_operational": 3, 00:22:33.323 "base_bdevs_list": [ 00:22:33.323 { 00:22:33.323 "name": null, 00:22:33.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.323 "is_configured": false, 00:22:33.323 "data_offset": 0, 00:22:33.323 "data_size": 63488 00:22:33.323 }, 00:22:33.323 { 00:22:33.323 "name": "BaseBdev2", 00:22:33.323 "uuid": "43e6209a-5435-5e17-b507-96ea5793429c", 00:22:33.323 "is_configured": true, 00:22:33.323 "data_offset": 2048, 00:22:33.323 "data_size": 63488 00:22:33.323 }, 00:22:33.323 { 00:22:33.323 "name": "BaseBdev3", 00:22:33.323 "uuid": "1c671a45-e868-53c5-8e23-2a24fc333276", 00:22:33.323 "is_configured": true, 00:22:33.323 "data_offset": 2048, 00:22:33.323 "data_size": 63488 00:22:33.323 }, 00:22:33.323 { 00:22:33.323 "name": "BaseBdev4", 00:22:33.323 "uuid": "1aae62bc-3e99-5514-b80b-e34d7871dcbf", 00:22:33.323 "is_configured": true, 00:22:33.323 "data_offset": 2048, 00:22:33.323 "data_size": 63488 00:22:33.323 } 00:22:33.323 ] 00:22:33.323 }' 00:22:33.323 12:21:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:33.323 12:21:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:22:33.582 12:21:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:33.582 12:21:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:33.582 12:21:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:33.582 12:21:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:33.582 12:21:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:33.582 12:21:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:33.582 12:21:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.582 12:21:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.582 12:21:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:33.582 12:21:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.582 12:21:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:33.582 "name": "raid_bdev1", 00:22:33.582 "uuid": "77b4891a-6486-4810-888c-1f45d89c390f", 00:22:33.582 "strip_size_kb": 64, 00:22:33.582 "state": "online", 00:22:33.582 "raid_level": "raid5f", 00:22:33.582 "superblock": true, 00:22:33.582 "num_base_bdevs": 4, 00:22:33.582 "num_base_bdevs_discovered": 3, 00:22:33.582 "num_base_bdevs_operational": 3, 00:22:33.582 "base_bdevs_list": [ 00:22:33.582 { 00:22:33.582 "name": null, 00:22:33.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.582 "is_configured": false, 00:22:33.582 "data_offset": 0, 00:22:33.582 "data_size": 63488 00:22:33.582 }, 00:22:33.582 { 00:22:33.582 "name": "BaseBdev2", 00:22:33.582 "uuid": "43e6209a-5435-5e17-b507-96ea5793429c", 00:22:33.582 "is_configured": true, 00:22:33.582 
"data_offset": 2048, 00:22:33.582 "data_size": 63488 00:22:33.582 }, 00:22:33.582 { 00:22:33.582 "name": "BaseBdev3", 00:22:33.582 "uuid": "1c671a45-e868-53c5-8e23-2a24fc333276", 00:22:33.582 "is_configured": true, 00:22:33.582 "data_offset": 2048, 00:22:33.582 "data_size": 63488 00:22:33.582 }, 00:22:33.582 { 00:22:33.582 "name": "BaseBdev4", 00:22:33.582 "uuid": "1aae62bc-3e99-5514-b80b-e34d7871dcbf", 00:22:33.582 "is_configured": true, 00:22:33.582 "data_offset": 2048, 00:22:33.582 "data_size": 63488 00:22:33.582 } 00:22:33.582 ] 00:22:33.582 }' 00:22:33.582 12:21:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:33.582 12:21:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:33.582 12:21:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:33.582 12:21:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:33.582 12:21:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85517 00:22:33.582 12:21:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85517 ']' 00:22:33.582 12:21:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85517 00:22:33.582 12:21:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:22:33.582 12:21:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:33.582 12:21:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85517 00:22:33.582 killing process with pid 85517 00:22:33.582 Received shutdown signal, test time was about 60.000000 seconds 00:22:33.582 00:22:33.582 Latency(us) 00:22:33.582 [2024-11-25T12:21:29.673Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:33.582 [2024-11-25T12:21:29.673Z] 
=================================================================================================================== 00:22:33.582 [2024-11-25T12:21:29.673Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:33.582 12:21:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:33.582 12:21:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:33.582 12:21:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85517' 00:22:33.582 12:21:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85517 00:22:33.582 12:21:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85517 00:22:33.582 [2024-11-25 12:21:29.670869] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:33.582 [2024-11-25 12:21:29.671192] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:33.582 [2024-11-25 12:21:29.671316] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:33.582 [2024-11-25 12:21:29.671359] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:22:34.148 [2024-11-25 12:21:30.123357] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:35.083 ************************************ 00:22:35.083 END TEST raid5f_rebuild_test_sb 00:22:35.083 ************************************ 00:22:35.083 12:21:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:22:35.083 00:22:35.083 real 0m28.473s 00:22:35.083 user 0m37.119s 00:22:35.083 sys 0m2.717s 00:22:35.083 12:21:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:35.083 12:21:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:35.342 12:21:31 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:22:35.342 12:21:31 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:22:35.342 12:21:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:35.342 12:21:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:35.342 12:21:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:35.342 ************************************ 00:22:35.342 START TEST raid_state_function_test_sb_4k 00:22:35.342 ************************************ 00:22:35.342 12:21:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:22:35.342 12:21:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:22:35.342 12:21:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:22:35.342 12:21:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:22:35.342 12:21:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:35.342 12:21:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:35.342 12:21:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:35.342 12:21:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:35.342 12:21:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:35.342 12:21:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:35.342 12:21:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:35.342 12:21:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:35.342 12:21:31 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:35.342 12:21:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:35.342 12:21:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:35.342 12:21:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:35.342 12:21:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:35.342 12:21:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:35.342 12:21:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:35.342 12:21:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:22:35.342 12:21:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:22:35.342 12:21:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:22:35.342 12:21:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:22:35.342 Process raid pid: 86338 00:22:35.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:35.342 12:21:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86338 00:22:35.342 12:21:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:35.342 12:21:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86338' 00:22:35.342 12:21:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86338 00:22:35.342 12:21:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86338 ']' 00:22:35.342 12:21:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.342 12:21:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:35.342 12:21:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.342 12:21:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:35.342 12:21:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:35.342 [2024-11-25 12:21:31.359212] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:22:35.342 [2024-11-25 12:21:31.360265] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:35.600 [2024-11-25 12:21:31.546020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.600 [2024-11-25 12:21:31.678790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.858 [2024-11-25 12:21:31.893718] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:35.858 [2024-11-25 12:21:31.893769] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:36.425 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:36.425 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:22:36.425 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:36.425 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.425 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:36.425 [2024-11-25 12:21:32.247325] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:36.425 [2024-11-25 12:21:32.247527] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:36.425 [2024-11-25 12:21:32.247653] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:36.425 [2024-11-25 12:21:32.247715] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:36.425 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:36.425 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:36.425 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:36.425 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:36.425 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:36.425 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:36.425 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:36.425 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:36.425 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:36.425 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:36.425 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:36.425 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:36.425 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:36.425 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.425 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:36.425 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.425 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:36.425 "name": "Existed_Raid", 00:22:36.425 "uuid": 
"ba74f294-4869-439d-9431-636323c4518e", 00:22:36.425 "strip_size_kb": 0, 00:22:36.425 "state": "configuring", 00:22:36.425 "raid_level": "raid1", 00:22:36.425 "superblock": true, 00:22:36.425 "num_base_bdevs": 2, 00:22:36.425 "num_base_bdevs_discovered": 0, 00:22:36.425 "num_base_bdevs_operational": 2, 00:22:36.425 "base_bdevs_list": [ 00:22:36.425 { 00:22:36.425 "name": "BaseBdev1", 00:22:36.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:36.425 "is_configured": false, 00:22:36.425 "data_offset": 0, 00:22:36.425 "data_size": 0 00:22:36.425 }, 00:22:36.425 { 00:22:36.425 "name": "BaseBdev2", 00:22:36.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:36.425 "is_configured": false, 00:22:36.425 "data_offset": 0, 00:22:36.425 "data_size": 0 00:22:36.425 } 00:22:36.425 ] 00:22:36.425 }' 00:22:36.425 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:36.425 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:36.683 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:36.683 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.683 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:36.683 [2024-11-25 12:21:32.747439] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:36.683 [2024-11-25 12:21:32.747481] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:22:36.683 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.683 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:36.683 12:21:32 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.683 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:36.683 [2024-11-25 12:21:32.755426] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:36.683 [2024-11-25 12:21:32.755501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:36.683 [2024-11-25 12:21:32.755519] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:36.683 [2024-11-25 12:21:32.755540] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:36.683 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.683 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:22:36.683 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.683 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:36.942 [2024-11-25 12:21:32.802707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:36.942 BaseBdev1 00:22:36.942 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.942 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:36.942 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:36.942 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:36.942 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:22:36.942 12:21:32 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:36.942 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:36.942 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:36.942 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.942 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:36.942 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.942 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:36.942 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.942 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:36.942 [ 00:22:36.942 { 00:22:36.942 "name": "BaseBdev1", 00:22:36.942 "aliases": [ 00:22:36.942 "64720183-24f1-45b7-8ee8-24efd6f55209" 00:22:36.942 ], 00:22:36.942 "product_name": "Malloc disk", 00:22:36.942 "block_size": 4096, 00:22:36.942 "num_blocks": 8192, 00:22:36.942 "uuid": "64720183-24f1-45b7-8ee8-24efd6f55209", 00:22:36.942 "assigned_rate_limits": { 00:22:36.942 "rw_ios_per_sec": 0, 00:22:36.942 "rw_mbytes_per_sec": 0, 00:22:36.942 "r_mbytes_per_sec": 0, 00:22:36.942 "w_mbytes_per_sec": 0 00:22:36.942 }, 00:22:36.942 "claimed": true, 00:22:36.942 "claim_type": "exclusive_write", 00:22:36.942 "zoned": false, 00:22:36.942 "supported_io_types": { 00:22:36.942 "read": true, 00:22:36.942 "write": true, 00:22:36.942 "unmap": true, 00:22:36.942 "flush": true, 00:22:36.942 "reset": true, 00:22:36.942 "nvme_admin": false, 00:22:36.942 "nvme_io": false, 00:22:36.942 "nvme_io_md": false, 00:22:36.942 "write_zeroes": true, 00:22:36.942 "zcopy": true, 00:22:36.942 
"get_zone_info": false, 00:22:36.942 "zone_management": false, 00:22:36.942 "zone_append": false, 00:22:36.942 "compare": false, 00:22:36.942 "compare_and_write": false, 00:22:36.942 "abort": true, 00:22:36.942 "seek_hole": false, 00:22:36.942 "seek_data": false, 00:22:36.942 "copy": true, 00:22:36.942 "nvme_iov_md": false 00:22:36.942 }, 00:22:36.942 "memory_domains": [ 00:22:36.942 { 00:22:36.942 "dma_device_id": "system", 00:22:36.942 "dma_device_type": 1 00:22:36.942 }, 00:22:36.942 { 00:22:36.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:36.942 "dma_device_type": 2 00:22:36.942 } 00:22:36.942 ], 00:22:36.942 "driver_specific": {} 00:22:36.942 } 00:22:36.942 ] 00:22:36.942 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.942 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:22:36.942 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:36.942 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:36.942 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:36.942 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:36.942 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:36.942 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:36.942 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:36.942 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:36.942 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:22:36.942 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:36.942 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:36.943 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.943 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:36.943 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:36.943 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.943 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:36.943 "name": "Existed_Raid", 00:22:36.943 "uuid": "f913bc87-34c1-4602-8d6f-45fc90aea945", 00:22:36.943 "strip_size_kb": 0, 00:22:36.943 "state": "configuring", 00:22:36.943 "raid_level": "raid1", 00:22:36.943 "superblock": true, 00:22:36.943 "num_base_bdevs": 2, 00:22:36.943 "num_base_bdevs_discovered": 1, 00:22:36.943 "num_base_bdevs_operational": 2, 00:22:36.943 "base_bdevs_list": [ 00:22:36.943 { 00:22:36.943 "name": "BaseBdev1", 00:22:36.943 "uuid": "64720183-24f1-45b7-8ee8-24efd6f55209", 00:22:36.943 "is_configured": true, 00:22:36.943 "data_offset": 256, 00:22:36.943 "data_size": 7936 00:22:36.943 }, 00:22:36.943 { 00:22:36.943 "name": "BaseBdev2", 00:22:36.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:36.943 "is_configured": false, 00:22:36.943 "data_offset": 0, 00:22:36.943 "data_size": 0 00:22:36.943 } 00:22:36.943 ] 00:22:36.943 }' 00:22:36.943 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:36.943 12:21:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:37.509 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:37.509 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.509 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:37.509 [2024-11-25 12:21:33.354907] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:37.509 [2024-11-25 12:21:33.354969] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:37.509 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.509 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:37.509 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.509 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:37.509 [2024-11-25 12:21:33.366984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:37.509 [2024-11-25 12:21:33.369775] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:37.509 [2024-11-25 12:21:33.369973] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:37.509 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.509 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:22:37.510 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:37.510 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:37.510 12:21:33 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:37.510 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:37.510 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:37.510 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:37.510 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:37.510 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:37.510 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:37.510 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:37.510 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:37.510 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:37.510 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:37.510 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.510 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:37.510 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.510 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:37.510 "name": "Existed_Raid", 00:22:37.510 "uuid": "180b2eff-759f-4193-95d6-59205ff15b9a", 00:22:37.510 "strip_size_kb": 0, 00:22:37.510 "state": "configuring", 00:22:37.510 "raid_level": "raid1", 00:22:37.510 "superblock": true, 
00:22:37.510 "num_base_bdevs": 2, 00:22:37.510 "num_base_bdevs_discovered": 1, 00:22:37.510 "num_base_bdevs_operational": 2, 00:22:37.510 "base_bdevs_list": [ 00:22:37.510 { 00:22:37.510 "name": "BaseBdev1", 00:22:37.510 "uuid": "64720183-24f1-45b7-8ee8-24efd6f55209", 00:22:37.510 "is_configured": true, 00:22:37.510 "data_offset": 256, 00:22:37.510 "data_size": 7936 00:22:37.510 }, 00:22:37.510 { 00:22:37.510 "name": "BaseBdev2", 00:22:37.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:37.510 "is_configured": false, 00:22:37.510 "data_offset": 0, 00:22:37.510 "data_size": 0 00:22:37.510 } 00:22:37.510 ] 00:22:37.510 }' 00:22:37.510 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:37.510 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:38.075 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:22:38.075 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.075 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:38.075 [2024-11-25 12:21:33.909753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:38.075 [2024-11-25 12:21:33.910154] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:38.075 [2024-11-25 12:21:33.910181] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:38.075 BaseBdev2 00:22:38.075 [2024-11-25 12:21:33.910634] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:38.075 [2024-11-25 12:21:33.910880] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:38.075 [2024-11-25 12:21:33.910921] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
Existed_Raid, raid_bdev 0x617000007e80 00:22:38.075 [2024-11-25 12:21:33.911104] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:38.075 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.075 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:38.075 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:38.075 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:38.075 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:22:38.075 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:38.075 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:38.075 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:38.075 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.075 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:38.075 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.075 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:38.075 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.075 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:38.075 [ 00:22:38.075 { 00:22:38.075 "name": "BaseBdev2", 00:22:38.075 "aliases": [ 00:22:38.075 "81599268-fd9a-49d3-8c8f-33b0885dd72f" 00:22:38.075 ], 00:22:38.075 "product_name": "Malloc 
disk", 00:22:38.075 "block_size": 4096, 00:22:38.075 "num_blocks": 8192, 00:22:38.075 "uuid": "81599268-fd9a-49d3-8c8f-33b0885dd72f", 00:22:38.075 "assigned_rate_limits": { 00:22:38.075 "rw_ios_per_sec": 0, 00:22:38.075 "rw_mbytes_per_sec": 0, 00:22:38.075 "r_mbytes_per_sec": 0, 00:22:38.075 "w_mbytes_per_sec": 0 00:22:38.075 }, 00:22:38.075 "claimed": true, 00:22:38.075 "claim_type": "exclusive_write", 00:22:38.075 "zoned": false, 00:22:38.075 "supported_io_types": { 00:22:38.075 "read": true, 00:22:38.075 "write": true, 00:22:38.075 "unmap": true, 00:22:38.075 "flush": true, 00:22:38.075 "reset": true, 00:22:38.075 "nvme_admin": false, 00:22:38.075 "nvme_io": false, 00:22:38.075 "nvme_io_md": false, 00:22:38.075 "write_zeroes": true, 00:22:38.075 "zcopy": true, 00:22:38.075 "get_zone_info": false, 00:22:38.075 "zone_management": false, 00:22:38.075 "zone_append": false, 00:22:38.075 "compare": false, 00:22:38.075 "compare_and_write": false, 00:22:38.075 "abort": true, 00:22:38.075 "seek_hole": false, 00:22:38.075 "seek_data": false, 00:22:38.075 "copy": true, 00:22:38.075 "nvme_iov_md": false 00:22:38.075 }, 00:22:38.075 "memory_domains": [ 00:22:38.075 { 00:22:38.075 "dma_device_id": "system", 00:22:38.075 "dma_device_type": 1 00:22:38.075 }, 00:22:38.075 { 00:22:38.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:38.075 "dma_device_type": 2 00:22:38.075 } 00:22:38.075 ], 00:22:38.075 "driver_specific": {} 00:22:38.075 } 00:22:38.075 ] 00:22:38.075 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.075 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:22:38.075 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:38.075 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:38.075 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:22:38.075 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:38.075 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:38.075 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:38.075 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:38.075 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:38.075 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:38.075 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:38.075 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:38.075 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:38.075 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:38.075 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:38.075 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.075 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:38.075 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.075 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:38.075 "name": "Existed_Raid", 00:22:38.076 "uuid": "180b2eff-759f-4193-95d6-59205ff15b9a", 00:22:38.076 "strip_size_kb": 0, 00:22:38.076 "state": "online", 
00:22:38.076 "raid_level": "raid1", 00:22:38.076 "superblock": true, 00:22:38.076 "num_base_bdevs": 2, 00:22:38.076 "num_base_bdevs_discovered": 2, 00:22:38.076 "num_base_bdevs_operational": 2, 00:22:38.076 "base_bdevs_list": [ 00:22:38.076 { 00:22:38.076 "name": "BaseBdev1", 00:22:38.076 "uuid": "64720183-24f1-45b7-8ee8-24efd6f55209", 00:22:38.076 "is_configured": true, 00:22:38.076 "data_offset": 256, 00:22:38.076 "data_size": 7936 00:22:38.076 }, 00:22:38.076 { 00:22:38.076 "name": "BaseBdev2", 00:22:38.076 "uuid": "81599268-fd9a-49d3-8c8f-33b0885dd72f", 00:22:38.076 "is_configured": true, 00:22:38.076 "data_offset": 256, 00:22:38.076 "data_size": 7936 00:22:38.076 } 00:22:38.076 ] 00:22:38.076 }' 00:22:38.076 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:38.076 12:21:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:38.642 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:38.642 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:38.642 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:38.642 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:38.642 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:22:38.642 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:38.642 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:38.642 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:38.642 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 
00:22:38.642 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:38.642 [2024-11-25 12:21:34.462287] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:38.642 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.642 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:38.642 "name": "Existed_Raid", 00:22:38.642 "aliases": [ 00:22:38.642 "180b2eff-759f-4193-95d6-59205ff15b9a" 00:22:38.642 ], 00:22:38.642 "product_name": "Raid Volume", 00:22:38.642 "block_size": 4096, 00:22:38.642 "num_blocks": 7936, 00:22:38.642 "uuid": "180b2eff-759f-4193-95d6-59205ff15b9a", 00:22:38.642 "assigned_rate_limits": { 00:22:38.642 "rw_ios_per_sec": 0, 00:22:38.642 "rw_mbytes_per_sec": 0, 00:22:38.642 "r_mbytes_per_sec": 0, 00:22:38.642 "w_mbytes_per_sec": 0 00:22:38.642 }, 00:22:38.642 "claimed": false, 00:22:38.642 "zoned": false, 00:22:38.642 "supported_io_types": { 00:22:38.642 "read": true, 00:22:38.642 "write": true, 00:22:38.642 "unmap": false, 00:22:38.642 "flush": false, 00:22:38.642 "reset": true, 00:22:38.642 "nvme_admin": false, 00:22:38.642 "nvme_io": false, 00:22:38.642 "nvme_io_md": false, 00:22:38.642 "write_zeroes": true, 00:22:38.642 "zcopy": false, 00:22:38.642 "get_zone_info": false, 00:22:38.642 "zone_management": false, 00:22:38.642 "zone_append": false, 00:22:38.642 "compare": false, 00:22:38.642 "compare_and_write": false, 00:22:38.642 "abort": false, 00:22:38.642 "seek_hole": false, 00:22:38.642 "seek_data": false, 00:22:38.642 "copy": false, 00:22:38.642 "nvme_iov_md": false 00:22:38.642 }, 00:22:38.642 "memory_domains": [ 00:22:38.642 { 00:22:38.642 "dma_device_id": "system", 00:22:38.642 "dma_device_type": 1 00:22:38.642 }, 00:22:38.642 { 00:22:38.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:38.642 "dma_device_type": 2 00:22:38.642 }, 00:22:38.642 { 00:22:38.642 
"dma_device_id": "system", 00:22:38.642 "dma_device_type": 1 00:22:38.642 }, 00:22:38.642 { 00:22:38.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:38.642 "dma_device_type": 2 00:22:38.642 } 00:22:38.642 ], 00:22:38.642 "driver_specific": { 00:22:38.642 "raid": { 00:22:38.642 "uuid": "180b2eff-759f-4193-95d6-59205ff15b9a", 00:22:38.642 "strip_size_kb": 0, 00:22:38.642 "state": "online", 00:22:38.642 "raid_level": "raid1", 00:22:38.642 "superblock": true, 00:22:38.642 "num_base_bdevs": 2, 00:22:38.642 "num_base_bdevs_discovered": 2, 00:22:38.642 "num_base_bdevs_operational": 2, 00:22:38.642 "base_bdevs_list": [ 00:22:38.642 { 00:22:38.642 "name": "BaseBdev1", 00:22:38.642 "uuid": "64720183-24f1-45b7-8ee8-24efd6f55209", 00:22:38.642 "is_configured": true, 00:22:38.642 "data_offset": 256, 00:22:38.642 "data_size": 7936 00:22:38.642 }, 00:22:38.642 { 00:22:38.642 "name": "BaseBdev2", 00:22:38.642 "uuid": "81599268-fd9a-49d3-8c8f-33b0885dd72f", 00:22:38.642 "is_configured": true, 00:22:38.642 "data_offset": 256, 00:22:38.642 "data_size": 7936 00:22:38.642 } 00:22:38.642 ] 00:22:38.642 } 00:22:38.642 } 00:22:38.642 }' 00:22:38.642 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:38.642 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:38.642 BaseBdev2' 00:22:38.642 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:38.642 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:22:38.642 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:38.642 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
00:22:38.642 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.643 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:38.643 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:38.643 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.643 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:22:38.643 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:22:38.643 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:38.643 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:38.643 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.643 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:38.643 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:38.643 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.643 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:22:38.643 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:22:38.643 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:38.643 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.643 
12:21:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:38.643 [2024-11-25 12:21:34.722127] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:38.969 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.969 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:38.969 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:22:38.969 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:38.969 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:22:38.969 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:22:38.969 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:22:38.969 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:38.969 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:38.969 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:38.969 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:38.969 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:38.969 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:38.969 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:38.969 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:38.969 12:21:34 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:38.969 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:38.969 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:38.969 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.970 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:38.970 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.970 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:38.970 "name": "Existed_Raid", 00:22:38.970 "uuid": "180b2eff-759f-4193-95d6-59205ff15b9a", 00:22:38.970 "strip_size_kb": 0, 00:22:38.970 "state": "online", 00:22:38.970 "raid_level": "raid1", 00:22:38.970 "superblock": true, 00:22:38.970 "num_base_bdevs": 2, 00:22:38.970 "num_base_bdevs_discovered": 1, 00:22:38.970 "num_base_bdevs_operational": 1, 00:22:38.970 "base_bdevs_list": [ 00:22:38.970 { 00:22:38.970 "name": null, 00:22:38.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:38.970 "is_configured": false, 00:22:38.970 "data_offset": 0, 00:22:38.970 "data_size": 7936 00:22:38.970 }, 00:22:38.970 { 00:22:38.970 "name": "BaseBdev2", 00:22:38.970 "uuid": "81599268-fd9a-49d3-8c8f-33b0885dd72f", 00:22:38.970 "is_configured": true, 00:22:38.970 "data_offset": 256, 00:22:38.970 "data_size": 7936 00:22:38.970 } 00:22:38.970 ] 00:22:38.970 }' 00:22:38.970 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:38.970 12:21:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:39.549 12:21:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:39.549 12:21:35 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:39.549 12:21:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:39.549 12:21:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:39.549 12:21:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.549 12:21:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:39.549 12:21:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.549 12:21:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:39.549 12:21:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:39.549 12:21:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:39.549 12:21:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.549 12:21:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:39.549 [2024-11-25 12:21:35.446373] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:39.549 [2024-11-25 12:21:35.446529] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:39.549 [2024-11-25 12:21:35.533816] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:39.549 [2024-11-25 12:21:35.533889] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:39.549 [2024-11-25 12:21:35.533910] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:39.549 12:21:35 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.549 12:21:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:39.549 12:21:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:39.549 12:21:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:39.549 12:21:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:39.549 12:21:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.549 12:21:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:39.549 12:21:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.549 12:21:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:39.549 12:21:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:39.549 12:21:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:22:39.549 12:21:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86338 00:22:39.549 12:21:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86338 ']' 00:22:39.549 12:21:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86338 00:22:39.549 12:21:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:22:39.549 12:21:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:39.549 12:21:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86338 00:22:39.549 killing process with pid 86338 00:22:39.549 12:21:35 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:39.549 12:21:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:39.549 12:21:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86338'
00:22:39.549 12:21:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86338
00:22:39.549 [2024-11-25 12:21:35.617107] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:22:39.550 12:21:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86338
00:22:39.550 [2024-11-25 12:21:35.632284] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:22:40.924 12:21:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0
00:22:40.924
00:22:40.924 real 0m5.475s
00:22:40.924 user 0m8.223s
00:22:40.924 sys 0m0.809s
00:22:40.924 12:21:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:40.924 ************************************
00:22:40.924 END TEST raid_state_function_test_sb_4k
00:22:40.924 ************************************
00:22:40.924 12:21:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:22:40.924 12:21:36 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2
00:22:40.924 12:21:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:22:40.924 12:21:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:40.924 12:21:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:22:40.924 ************************************
00:22:40.924 START TEST raid_superblock_test_4k
00:22:40.924 ************************************
00:22:40.924 12:21:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2
00:22:40.924 12:21:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1
00:22:40.924 12:21:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2
00:22:40.924 12:21:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:22:40.924 12:21:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:22:40.924 12:21:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:22:40.924 12:21:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:22:40.924 12:21:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:22:40.924 12:21:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:22:40.924 12:21:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:22:40.924 12:21:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size
00:22:40.924 12:21:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:22:40.924 12:21:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:22:40.924 12:21:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:22:40.924 12:21:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']'
00:22:40.924 12:21:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0
00:22:40.924 12:21:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86586
00:22:40.924 12:21:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86586
00:22:40.924 12:21:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:22:40.924 12:21:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86586 ']'
00:22:40.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:40.924 12:21:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:40.924 12:21:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:40.924 12:21:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:40.924 12:21:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:40.924 12:21:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:22:40.924 [2024-11-25 12:21:36.840907] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization...
00:22:40.924 [2024-11-25 12:21:36.841111] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86586 ]
00:22:41.183 [2024-11-25 12:21:37.018265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:41.183 [2024-11-25 12:21:37.165738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:41.441 [2024-11-25 12:21:37.371660] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:22:41.441 [2024-11-25 12:21:37.371918] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:22:42.008 12:21:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:42.008 12:21:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0
00:22:42.008 12:21:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:22:42.008 12:21:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:22:42.008 12:21:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:22:42.009 12:21:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:22:42.009 12:21:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:22:42.009 12:21:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:22:42.009 12:21:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:22:42.009 12:21:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:22:42.009 12:21:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1
00:22:42.009 12:21:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:42.009 12:21:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:22:42.009 malloc1
00:22:42.009 12:21:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:42.009 12:21:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:22:42.009 12:21:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:42.009 12:21:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:22:42.009 [2024-11-25 12:21:37.976877] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:22:42.009 [2024-11-25 12:21:37.977093] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:42.009 [2024-11-25 12:21:37.977169] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:22:42.009 [2024-11-25 12:21:37.977293] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:42.009 [2024-11-25 12:21:37.980264] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:42.009 [2024-11-25 12:21:37.980448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:22:42.009 pt1
00:22:42.009 12:21:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:42.009 12:21:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:22:42.009 12:21:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:22:42.009 12:21:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:22:42.009 12:21:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:22:42.009 12:21:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:22:42.009 12:21:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:22:42.009 12:21:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:22:42.009 12:21:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:22:42.009 12:21:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2
00:22:42.009 12:21:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:42.009 12:21:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:22:42.009 malloc2
00:22:42.009 12:21:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:42.009 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:22:42.009 12:21:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:42.009 12:21:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:22:42.009 [2024-11-25 12:21:38.033594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:22:42.009 [2024-11-25 12:21:38.033790] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:42.009 [2024-11-25 12:21:38.033831] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:22:42.009 [2024-11-25 12:21:38.033846] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:42.009 [2024-11-25 12:21:38.036686] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:42.009 [2024-11-25 12:21:38.036730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:22:42.009 pt2
00:22:42.009 12:21:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:42.009 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:22:42.009 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:22:42.009 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:22:42.009 12:21:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:42.009 12:21:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:22:42.009 [2024-11-25 12:21:38.045689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:22:42.009 [2024-11-25 12:21:38.048727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:22:42.009 [2024-11-25 12:21:38.048963] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:22:42.009 [2024-11-25 12:21:38.048987] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:22:42.009 [2024-11-25 12:21:38.049305] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:22:42.009 [2024-11-25 12:21:38.049540] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:22:42.009 [2024-11-25 12:21:38.049566] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:22:42.009 [2024-11-25 12:21:38.049793] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:22:42.009 12:21:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:42.009 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:22:42.009 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:22:42.009 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:22:42.009 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:22:42.009 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:22:42.009 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:22:42.009 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:42.009 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:42.009 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:42.009 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:42.009 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:42.009 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:42.009 12:21:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:42.009 12:21:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:22:42.009 12:21:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:42.268 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:42.268 "name": "raid_bdev1",
00:22:42.268 "uuid": "c87b9234-f9d9-405a-b43f-de7d83369813",
00:22:42.268 "strip_size_kb": 0,
00:22:42.268 "state": "online",
00:22:42.268 "raid_level": "raid1",
00:22:42.268 "superblock": true,
00:22:42.268 "num_base_bdevs": 2,
00:22:42.268 "num_base_bdevs_discovered": 2,
00:22:42.268 "num_base_bdevs_operational": 2,
00:22:42.268 "base_bdevs_list": [
00:22:42.268 {
00:22:42.268 "name": "pt1",
00:22:42.268 "uuid": "00000000-0000-0000-0000-000000000001",
00:22:42.268 "is_configured": true,
00:22:42.268 "data_offset": 256,
00:22:42.268 "data_size": 7936
00:22:42.268 },
00:22:42.268 {
00:22:42.268 "name": "pt2",
00:22:42.268 "uuid": "00000000-0000-0000-0000-000000000002",
00:22:42.268 "is_configured": true,
00:22:42.268 "data_offset": 256,
00:22:42.268 "data_size": 7936
00:22:42.268 }
00:22:42.268 ]
00:22:42.268 }'
00:22:42.268 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:42.268 12:21:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:22:42.527 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:22:42.528 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:22:42.528 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:22:42.528 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:22:42.528 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name
00:22:42.528 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:22:42.528 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:22:42.528 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:22:42.528 12:21:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:42.528 12:21:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:22:42.528 [2024-11-25 12:21:38.566255] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:22:42.528 12:21:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:42.786 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:22:42.786 "name": "raid_bdev1",
00:22:42.786 "aliases": [
00:22:42.786 "c87b9234-f9d9-405a-b43f-de7d83369813"
00:22:42.787 ],
00:22:42.787 "product_name": "Raid Volume",
00:22:42.787 "block_size": 4096,
00:22:42.787 "num_blocks": 7936,
00:22:42.787 "uuid": "c87b9234-f9d9-405a-b43f-de7d83369813",
00:22:42.787 "assigned_rate_limits": {
00:22:42.787 "rw_ios_per_sec": 0,
00:22:42.787 "rw_mbytes_per_sec": 0,
00:22:42.787 "r_mbytes_per_sec": 0,
00:22:42.787 "w_mbytes_per_sec": 0
00:22:42.787 },
00:22:42.787 "claimed": false,
00:22:42.787 "zoned": false,
00:22:42.787 "supported_io_types": {
00:22:42.787 "read": true,
00:22:42.787 "write": true,
00:22:42.787 "unmap": false,
00:22:42.787 "flush": false,
00:22:42.787 "reset": true,
00:22:42.787 "nvme_admin": false,
00:22:42.787 "nvme_io": false,
00:22:42.787 "nvme_io_md": false,
00:22:42.787 "write_zeroes": true,
00:22:42.787 "zcopy": false,
00:22:42.787 "get_zone_info": false,
00:22:42.787 "zone_management": false,
00:22:42.787 "zone_append": false,
00:22:42.787 "compare": false,
00:22:42.787 "compare_and_write": false,
00:22:42.787 "abort": false,
00:22:42.787 "seek_hole": false,
00:22:42.787 "seek_data": false,
00:22:42.787 "copy": false,
00:22:42.787 "nvme_iov_md": false
00:22:42.787 },
00:22:42.787 "memory_domains": [
00:22:42.787 {
00:22:42.787 "dma_device_id": "system",
00:22:42.787 "dma_device_type": 1
00:22:42.787 },
00:22:42.787 {
00:22:42.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:22:42.787 "dma_device_type": 2
00:22:42.787 },
00:22:42.787 {
00:22:42.787 "dma_device_id": "system",
00:22:42.787 "dma_device_type": 1
00:22:42.787 },
00:22:42.787 {
00:22:42.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:22:42.787 "dma_device_type": 2
00:22:42.787 }
00:22:42.787 ],
00:22:42.787 "driver_specific": {
00:22:42.787 "raid": {
00:22:42.787 "uuid": "c87b9234-f9d9-405a-b43f-de7d83369813",
00:22:42.787 "strip_size_kb": 0,
00:22:42.787 "state": "online",
00:22:42.787 "raid_level": "raid1",
00:22:42.787 "superblock": true,
00:22:42.787 "num_base_bdevs": 2,
00:22:42.787 "num_base_bdevs_discovered": 2,
00:22:42.787 "num_base_bdevs_operational": 2,
00:22:42.787 "base_bdevs_list": [
00:22:42.787 {
00:22:42.787 "name": "pt1",
00:22:42.787 "uuid": "00000000-0000-0000-0000-000000000001",
00:22:42.787 "is_configured": true,
00:22:42.787 "data_offset": 256,
00:22:42.787 "data_size": 7936
00:22:42.787 },
00:22:42.787 {
00:22:42.787 "name": "pt2",
00:22:42.787 "uuid": "00000000-0000-0000-0000-000000000002",
00:22:42.787 "is_configured": true,
00:22:42.787 "data_offset": 256,
00:22:42.787 "data_size": 7936
00:22:42.787 }
00:22:42.787 ]
00:22:42.787 }
00:22:42.787 }
00:22:42.787 }'
00:22:42.787 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:22:42.787 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:22:42.787 pt2'
00:22:42.787 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:22:42.787 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 '
00:22:42.787 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:22:42.787 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:22:42.787 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:22:42.787 12:21:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:42.787 12:21:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:22:42.787 12:21:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:42.787 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 '
00:22:42.787 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]]
00:22:42.787 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:22:42.787 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:22:42.787 12:21:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:42.787 12:21:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:22:42.787 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:22:42.787 12:21:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:42.787 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 '
00:22:42.787 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]]
00:22:42.787 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:22:42.787 12:21:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:42.787 12:21:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:22:42.787 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:22:42.787 [2024-11-25 12:21:38.858296] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:22:43.047 12:21:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:43.047 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c87b9234-f9d9-405a-b43f-de7d83369813
00:22:43.047 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z c87b9234-f9d9-405a-b43f-de7d83369813 ']'
00:22:43.047 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:22:43.047 12:21:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:43.047 12:21:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:22:43.047 [2024-11-25 12:21:38.909926] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:22:43.047 [2024-11-25 12:21:38.910073] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:22:43.047 [2024-11-25 12:21:38.910203] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:22:43.047 [2024-11-25 12:21:38.910281] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:22:43.047 [2024-11-25 12:21:38.910304] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:22:43.047 12:21:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:43.047 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:43.047 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:22:43.047 12:21:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:43.047 12:21:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:22:43.047 12:21:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:43.047 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:22:43.047 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:22:43.047 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:22:43.047 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:22:43.047 12:21:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:43.047 12:21:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:22:43.047 12:21:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:43.047 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:22:43.047 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:22:43.047 12:21:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:43.047 12:21:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:22:43.047 12:21:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:43.047 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:22:43.047 12:21:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:43.047 12:21:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:22:43.047 12:21:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:22:43.047 12:21:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:43.047 12:21:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:22:43.047 12:21:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:22:43.047 12:21:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0
00:22:43.047 12:21:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:22:43.047 12:21:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:22:43.047 12:21:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:43.047 12:21:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:22:43.047 12:21:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:43.047 12:21:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:22:43.047 12:21:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:43.047 12:21:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:22:43.047 [2024-11-25 12:21:39.045985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:22:43.047 [2024-11-25 12:21:39.048574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:22:43.047 [2024-11-25 12:21:39.048804] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:22:43.047 [2024-11-25 12:21:39.048907] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:22:43.047 [2024-11-25 12:21:39.048933] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:22:43.047 [2024-11-25 12:21:39.048948] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:22:43.047 request:
00:22:43.047 {
00:22:43.047 "name": "raid_bdev1",
00:22:43.047 "raid_level": "raid1",
00:22:43.047 "base_bdevs": [
00:22:43.047 "malloc1",
00:22:43.047 "malloc2"
00:22:43.047 ],
00:22:43.047 "superblock": false,
00:22:43.048 "method": "bdev_raid_create",
00:22:43.048 "req_id": 1
00:22:43.048 }
00:22:43.048 Got JSON-RPC error response
00:22:43.048 response:
00:22:43.048 {
00:22:43.048 "code": -17,
00:22:43.048 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:22:43.048 }
00:22:43.048 12:21:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:22:43.048 12:21:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1
00:22:43.048 12:21:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:43.048 12:21:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:43.048 12:21:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:43.048 12:21:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:43.048 12:21:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:43.048 12:21:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:22:43.048 12:21:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:22:43.048 12:21:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:43.048 12:21:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:22:43.048 12:21:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:22:43.048 12:21:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:22:43.048 12:21:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:43.048 12:21:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:22:43.048 [2024-11-25 12:21:39.113982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:22:43.048 [2024-11-25 12:21:39.114172] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:43.048 [2024-11-25 12:21:39.114242] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:22:43.048 [2024-11-25 12:21:39.114399] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:43.048 [2024-11-25 12:21:39.117317] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:43.048 [2024-11-25 12:21:39.117500] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:22:43.048 [2024-11-25 12:21:39.117745] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:22:43.048 [2024-11-25 12:21:39.117935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:22:43.048 pt1
00:22:43.048 12:21:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:43.048 12:21:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:22:43.048 12:21:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:22:43.048 12:21:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:22:43.048 12:21:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:22:43.048 12:21:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:22:43.048 12:21:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:22:43.048 12:21:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:43.048 12:21:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:43.048 12:21:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:43.048 12:21:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:43.048 12:21:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:43.048 12:21:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:43.048 12:21:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:43.048 12:21:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:22:43.313 12:21:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:43.313 12:21:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:43.313 "name": "raid_bdev1",
00:22:43.313 "uuid": "c87b9234-f9d9-405a-b43f-de7d83369813",
00:22:43.313 "strip_size_kb": 0,
00:22:43.313 "state": "configuring",
00:22:43.313 "raid_level": "raid1",
00:22:43.313 "superblock": true,
00:22:43.314 "num_base_bdevs": 2,
00:22:43.314 "num_base_bdevs_discovered": 1,
00:22:43.314 "num_base_bdevs_operational": 2,
00:22:43.314 "base_bdevs_list": [
00:22:43.314 {
00:22:43.314 "name": "pt1",
00:22:43.314 "uuid": "00000000-0000-0000-0000-000000000001",
00:22:43.314 "is_configured": true,
00:22:43.314 "data_offset": 256,
00:22:43.314 "data_size": 7936
00:22:43.314 },
00:22:43.314 {
00:22:43.314 "name": null,
00:22:43.314 "uuid": "00000000-0000-0000-0000-000000000002",
00:22:43.314 "is_configured": false,
00:22:43.314 "data_offset": 256,
00:22:43.314 "data_size": 7936
00:22:43.314 }
00:22:43.314 ]
00:22:43.314 }'
00:22:43.314 12:21:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:43.314 12:21:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:22:43.572 12:21:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:22:43.572 12:21:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:22:43.572 12:21:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:22:43.572 12:21:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:22:43.572 12:21:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:43.572 12:21:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:22:43.572 [2024-11-25 12:21:39.622450] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:22:43.572 [2024-11-25 12:21:39.622546] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:43.572 [2024-11-25 12:21:39.622579] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:22:43.572 [2024-11-25 12:21:39.622597] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:43.572 [2024-11-25 12:21:39.623165] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:43.572 [2024-11-25 12:21:39.623203] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:22:43.572 [2024-11-25 12:21:39.623303] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:22:43.572 [2024-11-25 12:21:39.623354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:22:43.572 [2024-11-25 12:21:39.623507] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:22:43.572 [2024-11-25 12:21:39.623538] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:22:43.572 [2024-11-25 12:21:39.623827] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:22:43.572 [2024-11-25 12:21:39.624019] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:22:43.572 [2024-11-25 12:21:39.624034] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:22:43.572 [2024-11-25 12:21:39.624204] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:22:43.572 pt2
00:22:43.572 12:21:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:43.572 12:21:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:22:43.572 12:21:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:22:43.572 12:21:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:22:43.572 12:21:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:22:43.572 12:21:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:22:43.572 12:21:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:22:43.572 12:21:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:22:43.572 12:21:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:22:43.572 12:21:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:43.572 12:21:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:43.572 12:21:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:43.572 12:21:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:43.572 12:21:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:43.572 12:21:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:43.572 12:21:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:22:43.572 12:21:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:43.832 12:21:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:43.832 12:21:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:43.832 "name": "raid_bdev1",
00:22:43.832 "uuid": "c87b9234-f9d9-405a-b43f-de7d83369813",
00:22:43.832 "strip_size_kb": 0,
00:22:43.832 "state": "online",
00:22:43.832 "raid_level": "raid1",
00:22:43.832 "superblock": true,
00:22:43.832 "num_base_bdevs": 2,
00:22:43.832 "num_base_bdevs_discovered": 2,
00:22:43.832 "num_base_bdevs_operational": 2,
00:22:43.832 "base_bdevs_list": [
00:22:43.832 {
00:22:43.832 "name": "pt1",
00:22:43.832 "uuid": "00000000-0000-0000-0000-000000000001",
00:22:43.832 "is_configured": true,
00:22:43.832 "data_offset": 256,
00:22:43.832 "data_size": 7936
00:22:43.832 },
00:22:43.832 {
00:22:43.832 "name": "pt2",
00:22:43.832 "uuid": "00000000-0000-0000-0000-000000000002",
00:22:43.832 "is_configured": true,
00:22:43.832 "data_offset": 256,
00:22:43.832 "data_size": 7936
00:22:43.832 }
00:22:43.832 ]
00:22:43.832 }'
00:22:43.832 12:21:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:43.832 12:21:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:22:44.091 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:22:44.091 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:22:44.091 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:22:44.091 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:22:44.091 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name
00:22:44.091 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:22:44.091 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:22:44.091 12:21:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:44.091 12:21:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:22:44.091 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:22:44.091 [2024-11-25 12:21:40.162911]
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:44.350 12:21:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.350 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:44.351 "name": "raid_bdev1", 00:22:44.351 "aliases": [ 00:22:44.351 "c87b9234-f9d9-405a-b43f-de7d83369813" 00:22:44.351 ], 00:22:44.351 "product_name": "Raid Volume", 00:22:44.351 "block_size": 4096, 00:22:44.351 "num_blocks": 7936, 00:22:44.351 "uuid": "c87b9234-f9d9-405a-b43f-de7d83369813", 00:22:44.351 "assigned_rate_limits": { 00:22:44.351 "rw_ios_per_sec": 0, 00:22:44.351 "rw_mbytes_per_sec": 0, 00:22:44.351 "r_mbytes_per_sec": 0, 00:22:44.351 "w_mbytes_per_sec": 0 00:22:44.351 }, 00:22:44.351 "claimed": false, 00:22:44.351 "zoned": false, 00:22:44.351 "supported_io_types": { 00:22:44.351 "read": true, 00:22:44.351 "write": true, 00:22:44.351 "unmap": false, 00:22:44.351 "flush": false, 00:22:44.351 "reset": true, 00:22:44.351 "nvme_admin": false, 00:22:44.351 "nvme_io": false, 00:22:44.351 "nvme_io_md": false, 00:22:44.351 "write_zeroes": true, 00:22:44.351 "zcopy": false, 00:22:44.351 "get_zone_info": false, 00:22:44.351 "zone_management": false, 00:22:44.351 "zone_append": false, 00:22:44.351 "compare": false, 00:22:44.351 "compare_and_write": false, 00:22:44.351 "abort": false, 00:22:44.351 "seek_hole": false, 00:22:44.351 "seek_data": false, 00:22:44.351 "copy": false, 00:22:44.351 "nvme_iov_md": false 00:22:44.351 }, 00:22:44.351 "memory_domains": [ 00:22:44.351 { 00:22:44.351 "dma_device_id": "system", 00:22:44.351 "dma_device_type": 1 00:22:44.351 }, 00:22:44.351 { 00:22:44.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:44.351 "dma_device_type": 2 00:22:44.351 }, 00:22:44.351 { 00:22:44.351 "dma_device_id": "system", 00:22:44.351 "dma_device_type": 1 00:22:44.351 }, 00:22:44.351 { 00:22:44.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:44.351 
"dma_device_type": 2 00:22:44.351 } 00:22:44.351 ], 00:22:44.351 "driver_specific": { 00:22:44.351 "raid": { 00:22:44.351 "uuid": "c87b9234-f9d9-405a-b43f-de7d83369813", 00:22:44.351 "strip_size_kb": 0, 00:22:44.351 "state": "online", 00:22:44.351 "raid_level": "raid1", 00:22:44.351 "superblock": true, 00:22:44.351 "num_base_bdevs": 2, 00:22:44.351 "num_base_bdevs_discovered": 2, 00:22:44.351 "num_base_bdevs_operational": 2, 00:22:44.351 "base_bdevs_list": [ 00:22:44.351 { 00:22:44.351 "name": "pt1", 00:22:44.351 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:44.351 "is_configured": true, 00:22:44.351 "data_offset": 256, 00:22:44.351 "data_size": 7936 00:22:44.351 }, 00:22:44.351 { 00:22:44.351 "name": "pt2", 00:22:44.351 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:44.351 "is_configured": true, 00:22:44.351 "data_offset": 256, 00:22:44.351 "data_size": 7936 00:22:44.351 } 00:22:44.351 ] 00:22:44.351 } 00:22:44.351 } 00:22:44.351 }' 00:22:44.351 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:44.351 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:44.351 pt2' 00:22:44.351 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:44.351 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:22:44.351 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:44.351 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:44.351 12:21:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.351 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:22:44.351 12:21:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:44.351 12:21:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.351 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:22:44.351 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:22:44.351 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:44.351 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:44.351 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:44.351 12:21:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.351 12:21:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:44.351 12:21:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.351 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:22:44.351 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:22:44.351 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:22:44.351 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:44.351 12:21:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.351 12:21:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:44.351 [2024-11-25 12:21:40.406951] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:44.351 12:21:40 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.611 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' c87b9234-f9d9-405a-b43f-de7d83369813 '!=' c87b9234-f9d9-405a-b43f-de7d83369813 ']' 00:22:44.611 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:22:44.611 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:44.611 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:22:44.611 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:22:44.611 12:21:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.611 12:21:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:44.611 [2024-11-25 12:21:40.450732] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:44.611 12:21:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.611 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:44.611 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:44.611 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:44.611 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:44.611 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:44.611 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:44.611 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:44.611 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:22:44.611 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:44.611 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:44.611 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:44.611 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:44.611 12:21:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.611 12:21:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:44.611 12:21:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.611 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:44.611 "name": "raid_bdev1", 00:22:44.611 "uuid": "c87b9234-f9d9-405a-b43f-de7d83369813", 00:22:44.611 "strip_size_kb": 0, 00:22:44.611 "state": "online", 00:22:44.611 "raid_level": "raid1", 00:22:44.611 "superblock": true, 00:22:44.611 "num_base_bdevs": 2, 00:22:44.611 "num_base_bdevs_discovered": 1, 00:22:44.611 "num_base_bdevs_operational": 1, 00:22:44.611 "base_bdevs_list": [ 00:22:44.611 { 00:22:44.611 "name": null, 00:22:44.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.611 "is_configured": false, 00:22:44.611 "data_offset": 0, 00:22:44.611 "data_size": 7936 00:22:44.611 }, 00:22:44.611 { 00:22:44.611 "name": "pt2", 00:22:44.611 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:44.611 "is_configured": true, 00:22:44.611 "data_offset": 256, 00:22:44.611 "data_size": 7936 00:22:44.611 } 00:22:44.611 ] 00:22:44.611 }' 00:22:44.611 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:44.611 12:21:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:45.180 12:21:40 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:45.180 12:21:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.180 12:21:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:45.180 [2024-11-25 12:21:40.974902] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:45.180 [2024-11-25 12:21:40.974936] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:45.180 [2024-11-25 12:21:40.975077] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:45.180 [2024-11-25 12:21:40.975147] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:45.180 [2024-11-25 12:21:40.975167] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:22:45.180 12:21:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.180 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:45.180 12:21:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:22:45.180 12:21:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.180 12:21:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:45.180 12:21:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.180 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:22:45.180 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:22:45.180 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:22:45.180 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 
-- # (( i < num_base_bdevs )) 00:22:45.180 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:22:45.180 12:21:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.180 12:21:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:45.180 12:21:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.180 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:45.180 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:45.180 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:22:45.180 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:45.180 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:22:45.180 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:45.180 12:21:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.180 12:21:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:45.180 [2024-11-25 12:21:41.042887] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:45.180 [2024-11-25 12:21:41.042976] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:45.180 [2024-11-25 12:21:41.043023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:22:45.180 [2024-11-25 12:21:41.043051] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:45.180 [2024-11-25 12:21:41.046005] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:45.180 [2024-11-25 12:21:41.046054] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:45.180 [2024-11-25 12:21:41.046168] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:45.180 [2024-11-25 12:21:41.046231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:45.180 [2024-11-25 12:21:41.046393] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:45.180 [2024-11-25 12:21:41.046428] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:45.180 [2024-11-25 12:21:41.046725] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:45.180 [2024-11-25 12:21:41.046932] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:45.180 [2024-11-25 12:21:41.046948] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:22:45.180 [2024-11-25 12:21:41.047184] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:45.180 pt2 00:22:45.180 12:21:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.180 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:45.180 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:45.180 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:45.180 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:45.180 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:45.180 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:45.180 12:21:41 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:45.180 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:45.180 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:45.180 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:45.180 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:45.181 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.181 12:21:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.181 12:21:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:45.181 12:21:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.181 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:45.181 "name": "raid_bdev1", 00:22:45.181 "uuid": "c87b9234-f9d9-405a-b43f-de7d83369813", 00:22:45.181 "strip_size_kb": 0, 00:22:45.181 "state": "online", 00:22:45.181 "raid_level": "raid1", 00:22:45.181 "superblock": true, 00:22:45.181 "num_base_bdevs": 2, 00:22:45.181 "num_base_bdevs_discovered": 1, 00:22:45.181 "num_base_bdevs_operational": 1, 00:22:45.181 "base_bdevs_list": [ 00:22:45.181 { 00:22:45.181 "name": null, 00:22:45.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.181 "is_configured": false, 00:22:45.181 "data_offset": 256, 00:22:45.181 "data_size": 7936 00:22:45.181 }, 00:22:45.181 { 00:22:45.181 "name": "pt2", 00:22:45.181 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:45.181 "is_configured": true, 00:22:45.181 "data_offset": 256, 00:22:45.181 "data_size": 7936 00:22:45.181 } 00:22:45.181 ] 00:22:45.181 }' 00:22:45.181 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:22:45.181 12:21:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:45.749 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:45.749 12:21:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.749 12:21:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:45.749 [2024-11-25 12:21:41.551244] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:45.749 [2024-11-25 12:21:41.551282] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:45.749 [2024-11-25 12:21:41.551403] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:45.749 [2024-11-25 12:21:41.551492] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:45.749 [2024-11-25 12:21:41.551508] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:22:45.749 12:21:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.749 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:45.749 12:21:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.749 12:21:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:45.749 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:22:45.749 12:21:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.749 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:22:45.749 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:22:45.749 12:21:41 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:22:45.749 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:45.749 12:21:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.749 12:21:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:45.749 [2024-11-25 12:21:41.611239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:45.749 [2024-11-25 12:21:41.611306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:45.749 [2024-11-25 12:21:41.611353] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:22:45.749 [2024-11-25 12:21:41.611388] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:45.749 [2024-11-25 12:21:41.614330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:45.749 [2024-11-25 12:21:41.614402] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:45.749 [2024-11-25 12:21:41.614536] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:45.749 [2024-11-25 12:21:41.614595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:45.749 [2024-11-25 12:21:41.614773] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:45.749 [2024-11-25 12:21:41.614791] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:45.749 [2024-11-25 12:21:41.614812] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:22:45.749 [2024-11-25 12:21:41.614885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:45.749 [2024-11-25 
12:21:41.614991] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:22:45.749 [2024-11-25 12:21:41.615012] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:45.749 [2024-11-25 12:21:41.615376] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:45.749 [2024-11-25 12:21:41.615571] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:22:45.749 [2024-11-25 12:21:41.615590] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:22:45.749 [2024-11-25 12:21:41.615816] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:45.749 pt1 00:22:45.749 12:21:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.749 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:22:45.749 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:45.749 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:45.749 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:45.749 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:45.749 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:45.749 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:45.749 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:45.749 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:45.749 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:22:45.749 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:45.749 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:45.749 12:21:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.749 12:21:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:45.749 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.749 12:21:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.749 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:45.749 "name": "raid_bdev1", 00:22:45.749 "uuid": "c87b9234-f9d9-405a-b43f-de7d83369813", 00:22:45.749 "strip_size_kb": 0, 00:22:45.749 "state": "online", 00:22:45.749 "raid_level": "raid1", 00:22:45.749 "superblock": true, 00:22:45.749 "num_base_bdevs": 2, 00:22:45.749 "num_base_bdevs_discovered": 1, 00:22:45.749 "num_base_bdevs_operational": 1, 00:22:45.749 "base_bdevs_list": [ 00:22:45.749 { 00:22:45.749 "name": null, 00:22:45.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.749 "is_configured": false, 00:22:45.749 "data_offset": 256, 00:22:45.749 "data_size": 7936 00:22:45.749 }, 00:22:45.749 { 00:22:45.749 "name": "pt2", 00:22:45.749 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:45.749 "is_configured": true, 00:22:45.749 "data_offset": 256, 00:22:45.749 "data_size": 7936 00:22:45.749 } 00:22:45.749 ] 00:22:45.749 }' 00:22:45.749 12:21:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:45.749 12:21:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:46.317 12:21:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:22:46.317 12:21:42 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.317 12:21:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:46.317 12:21:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:46.317 12:21:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.317 12:21:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:22:46.317 12:21:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:22:46.317 12:21:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:46.317 12:21:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.317 12:21:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:46.317 [2024-11-25 12:21:42.224365] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:46.317 12:21:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.317 12:21:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' c87b9234-f9d9-405a-b43f-de7d83369813 '!=' c87b9234-f9d9-405a-b43f-de7d83369813 ']' 00:22:46.317 12:21:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86586 00:22:46.317 12:21:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86586 ']' 00:22:46.317 12:21:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86586 00:22:46.317 12:21:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:22:46.317 12:21:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:46.317 12:21:42 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86586 00:22:46.317 12:21:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:46.317 12:21:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:46.317 12:21:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86586' 00:22:46.317 killing process with pid 86586 00:22:46.317 12:21:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86586 00:22:46.317 [2024-11-25 12:21:42.309971] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:46.317 12:21:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86586 00:22:46.317 [2024-11-25 12:21:42.310193] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:46.317 [2024-11-25 12:21:42.310262] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:46.317 [2024-11-25 12:21:42.310284] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:22:46.575 [2024-11-25 12:21:42.493720] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:47.511 12:21:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:22:47.511 00:22:47.511 real 0m6.770s 00:22:47.511 user 0m10.762s 00:22:47.511 sys 0m0.977s 00:22:47.511 12:21:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:47.511 12:21:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:47.511 ************************************ 00:22:47.511 END TEST raid_superblock_test_4k 00:22:47.511 ************************************ 00:22:47.511 12:21:43 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:22:47.511 12:21:43 bdev_raid -- 
bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:22:47.511 12:21:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:22:47.511 12:21:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:47.511 12:21:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:47.511 ************************************ 00:22:47.511 START TEST raid_rebuild_test_sb_4k 00:22:47.511 ************************************ 00:22:47.511 12:21:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:22:47.511 12:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:22:47.511 12:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:22:47.511 12:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:22:47.511 12:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:22:47.511 12:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:22:47.511 12:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:22:47.511 12:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:47.511 12:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:22:47.511 12:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:47.511 12:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:47.511 12:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:22:47.511 12:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:47.511 12:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:22:47.511 12:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:47.511 12:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:22:47.511 12:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:22:47.511 12:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:22:47.511 12:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:22:47.511 12:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:22:47.511 12:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:22:47.511 12:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:22:47.511 12:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:22:47.511 12:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:22:47.511 12:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:22:47.512 12:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86920 00:22:47.512 12:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86920 00:22:47.512 12:21:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:47.512 12:21:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86920 ']' 00:22:47.512 12:21:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:47.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:47.512 12:21:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:47.512 12:21:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:47.512 12:21:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:47.512 12:21:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:47.770 [2024-11-25 12:21:43.687117] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:22:47.770 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:47.770 Zero copy mechanism will not be used. 00:22:47.770 [2024-11-25 12:21:43.687528] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86920 ] 00:22:48.045 [2024-11-25 12:21:43.872983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.045 [2024-11-25 12:21:44.002840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.303 [2024-11-25 12:21:44.215536] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:48.303 [2024-11-25 12:21:44.215743] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:48.870 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:48.870 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:22:48.870 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:48.870 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:22:48.870 
12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.870 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:48.870 BaseBdev1_malloc 00:22:48.870 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.870 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:48.870 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:48.871 [2024-11-25 12:21:44.717689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:48.871 [2024-11-25 12:21:44.717775] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:48.871 [2024-11-25 12:21:44.717811] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:48.871 [2024-11-25 12:21:44.717829] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:48.871 [2024-11-25 12:21:44.720634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:48.871 [2024-11-25 12:21:44.720686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:48.871 BaseBdev1 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:22:48.871 BaseBdev2_malloc 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:48.871 [2024-11-25 12:21:44.774445] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:48.871 [2024-11-25 12:21:44.774543] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:48.871 [2024-11-25 12:21:44.774573] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:48.871 [2024-11-25 12:21:44.774594] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:48.871 [2024-11-25 12:21:44.777322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:48.871 [2024-11-25 12:21:44.777388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:48.871 BaseBdev2 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:48.871 spare_malloc 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b 
spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:48.871 spare_delay 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:48.871 [2024-11-25 12:21:44.851362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:48.871 [2024-11-25 12:21:44.851568] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:48.871 [2024-11-25 12:21:44.851608] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:48.871 [2024-11-25 12:21:44.851628] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:48.871 [2024-11-25 12:21:44.854419] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:48.871 [2024-11-25 12:21:44.854468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:48.871 spare 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:48.871 
[2024-11-25 12:21:44.859450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:48.871 [2024-11-25 12:21:44.861805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:48.871 [2024-11-25 12:21:44.862027] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:48.871 [2024-11-25 12:21:44.862052] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:48.871 [2024-11-25 12:21:44.862393] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:48.871 [2024-11-25 12:21:44.862624] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:48.871 [2024-11-25 12:21:44.862641] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:48.871 [2024-11-25 12:21:44.862833] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:48.871 "name": "raid_bdev1", 00:22:48.871 "uuid": "c687ac72-c7c6-4598-becc-46ea31bf4a00", 00:22:48.871 "strip_size_kb": 0, 00:22:48.871 "state": "online", 00:22:48.871 "raid_level": "raid1", 00:22:48.871 "superblock": true, 00:22:48.871 "num_base_bdevs": 2, 00:22:48.871 "num_base_bdevs_discovered": 2, 00:22:48.871 "num_base_bdevs_operational": 2, 00:22:48.871 "base_bdevs_list": [ 00:22:48.871 { 00:22:48.871 "name": "BaseBdev1", 00:22:48.871 "uuid": "6ac1f18d-661e-5e49-b46e-90567198b601", 00:22:48.871 "is_configured": true, 00:22:48.871 "data_offset": 256, 00:22:48.871 "data_size": 7936 00:22:48.871 }, 00:22:48.871 { 00:22:48.871 "name": "BaseBdev2", 00:22:48.871 "uuid": "2fa20f7a-9422-5439-85c3-3a57c8bdaa43", 00:22:48.871 "is_configured": true, 00:22:48.871 "data_offset": 256, 00:22:48.871 "data_size": 7936 00:22:48.871 } 00:22:48.871 ] 00:22:48.871 }' 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:48.871 12:21:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set 
+x 00:22:49.439 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:49.439 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:22:49.439 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.439 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:49.439 [2024-11-25 12:21:45.411936] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:49.439 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.439 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:22:49.439 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:49.439 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:49.439 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.439 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:49.439 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.439 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:22:49.439 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:22:49.439 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:22:49.439 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:22:49.439 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:22:49.439 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:22:49.439 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:49.439 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:49.439 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:49.439 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:49.439 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:22:49.439 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:49.439 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:49.439 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:50.005 [2024-11-25 12:21:45.815755] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:50.005 /dev/nbd0 00:22:50.005 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:50.005 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:50.005 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:50.005 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:22:50.005 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:50.005 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:50.005 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:50.005 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:22:50.005 12:21:45 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:50.005 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:50.005 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:50.005 1+0 records in 00:22:50.005 1+0 records out 00:22:50.005 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000491965 s, 8.3 MB/s 00:22:50.005 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:50.005 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:22:50.005 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:50.005 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:50.005 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:22:50.005 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:50.005 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:50.005 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:22:50.005 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:22:50.005 12:21:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:22:50.938 7936+0 records in 00:22:50.938 7936+0 records out 00:22:50.938 32505856 bytes (33 MB, 31 MiB) copied, 0.904976 s, 35.9 MB/s 00:22:50.938 12:21:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:22:50.938 12:21:46 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:50.938 12:21:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:50.938 12:21:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:50.938 12:21:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:22:50.938 12:21:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:50.938 12:21:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:51.197 12:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:51.197 [2024-11-25 12:21:47.120069] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:51.197 12:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:51.197 12:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:51.197 12:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:51.197 12:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:51.197 12:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:51.197 12:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:22:51.197 12:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:22:51.197 12:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:22:51.197 12:21:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.197 12:21:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:51.197 [2024-11-25 12:21:47.128197] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:51.197 12:21:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.197 12:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:51.197 12:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:51.197 12:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:51.197 12:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:51.197 12:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:51.197 12:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:51.197 12:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:51.197 12:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:51.197 12:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:51.197 12:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:51.197 12:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:51.197 12:21:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.197 12:21:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:51.197 12:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:51.197 12:21:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.197 12:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:51.197 "name": 
"raid_bdev1", 00:22:51.197 "uuid": "c687ac72-c7c6-4598-becc-46ea31bf4a00", 00:22:51.197 "strip_size_kb": 0, 00:22:51.197 "state": "online", 00:22:51.197 "raid_level": "raid1", 00:22:51.197 "superblock": true, 00:22:51.197 "num_base_bdevs": 2, 00:22:51.197 "num_base_bdevs_discovered": 1, 00:22:51.197 "num_base_bdevs_operational": 1, 00:22:51.197 "base_bdevs_list": [ 00:22:51.197 { 00:22:51.197 "name": null, 00:22:51.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.197 "is_configured": false, 00:22:51.197 "data_offset": 0, 00:22:51.197 "data_size": 7936 00:22:51.197 }, 00:22:51.197 { 00:22:51.197 "name": "BaseBdev2", 00:22:51.197 "uuid": "2fa20f7a-9422-5439-85c3-3a57c8bdaa43", 00:22:51.197 "is_configured": true, 00:22:51.197 "data_offset": 256, 00:22:51.197 "data_size": 7936 00:22:51.197 } 00:22:51.197 ] 00:22:51.197 }' 00:22:51.197 12:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:51.197 12:21:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:51.763 12:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:51.763 12:21:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.763 12:21:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:51.763 [2024-11-25 12:21:47.656356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:51.763 [2024-11-25 12:21:47.672637] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:22:51.763 12:21:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.763 12:21:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:22:51.763 [2024-11-25 12:21:47.675181] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:52.699 12:21:48 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:52.699 12:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:52.699 12:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:52.699 12:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:52.699 12:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:52.699 12:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:52.699 12:21:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.699 12:21:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:52.699 12:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.699 12:21:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.699 12:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:52.699 "name": "raid_bdev1", 00:22:52.699 "uuid": "c687ac72-c7c6-4598-becc-46ea31bf4a00", 00:22:52.699 "strip_size_kb": 0, 00:22:52.699 "state": "online", 00:22:52.699 "raid_level": "raid1", 00:22:52.699 "superblock": true, 00:22:52.699 "num_base_bdevs": 2, 00:22:52.699 "num_base_bdevs_discovered": 2, 00:22:52.699 "num_base_bdevs_operational": 2, 00:22:52.699 "process": { 00:22:52.699 "type": "rebuild", 00:22:52.699 "target": "spare", 00:22:52.699 "progress": { 00:22:52.699 "blocks": 2560, 00:22:52.699 "percent": 32 00:22:52.699 } 00:22:52.699 }, 00:22:52.699 "base_bdevs_list": [ 00:22:52.699 { 00:22:52.699 "name": "spare", 00:22:52.699 "uuid": "b28cffba-3467-517d-80fe-eb02f196eb88", 00:22:52.699 "is_configured": true, 00:22:52.699 "data_offset": 256, 
00:22:52.699 "data_size": 7936 00:22:52.699 }, 00:22:52.699 { 00:22:52.699 "name": "BaseBdev2", 00:22:52.699 "uuid": "2fa20f7a-9422-5439-85c3-3a57c8bdaa43", 00:22:52.699 "is_configured": true, 00:22:52.699 "data_offset": 256, 00:22:52.699 "data_size": 7936 00:22:52.699 } 00:22:52.699 ] 00:22:52.699 }' 00:22:52.699 12:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:52.699 12:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:52.699 12:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:52.958 12:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:52.958 12:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:52.958 12:21:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.958 12:21:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:52.958 [2024-11-25 12:21:48.828668] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:52.958 [2024-11-25 12:21:48.884028] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:52.958 [2024-11-25 12:21:48.884157] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:52.958 [2024-11-25 12:21:48.884182] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:52.958 [2024-11-25 12:21:48.884196] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:52.958 12:21:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.958 12:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:52.958 
12:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:52.958 12:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:52.958 12:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:52.958 12:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:52.958 12:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:52.958 12:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:52.958 12:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:52.958 12:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:52.958 12:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:52.958 12:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:52.958 12:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.958 12:21:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.958 12:21:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:52.958 12:21:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.958 12:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:52.958 "name": "raid_bdev1", 00:22:52.958 "uuid": "c687ac72-c7c6-4598-becc-46ea31bf4a00", 00:22:52.958 "strip_size_kb": 0, 00:22:52.958 "state": "online", 00:22:52.958 "raid_level": "raid1", 00:22:52.958 "superblock": true, 00:22:52.958 "num_base_bdevs": 2, 00:22:52.958 "num_base_bdevs_discovered": 1, 00:22:52.958 
"num_base_bdevs_operational": 1, 00:22:52.958 "base_bdevs_list": [ 00:22:52.958 { 00:22:52.958 "name": null, 00:22:52.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:52.958 "is_configured": false, 00:22:52.958 "data_offset": 0, 00:22:52.958 "data_size": 7936 00:22:52.958 }, 00:22:52.958 { 00:22:52.958 "name": "BaseBdev2", 00:22:52.958 "uuid": "2fa20f7a-9422-5439-85c3-3a57c8bdaa43", 00:22:52.958 "is_configured": true, 00:22:52.958 "data_offset": 256, 00:22:52.958 "data_size": 7936 00:22:52.958 } 00:22:52.958 ] 00:22:52.958 }' 00:22:52.958 12:21:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:52.958 12:21:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:53.526 12:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:53.526 12:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:53.526 12:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:53.526 12:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:53.526 12:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:53.526 12:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:53.526 12:21:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.526 12:21:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:53.526 12:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.526 12:21:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.526 12:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:53.526 
"name": "raid_bdev1", 00:22:53.526 "uuid": "c687ac72-c7c6-4598-becc-46ea31bf4a00", 00:22:53.526 "strip_size_kb": 0, 00:22:53.526 "state": "online", 00:22:53.526 "raid_level": "raid1", 00:22:53.526 "superblock": true, 00:22:53.526 "num_base_bdevs": 2, 00:22:53.526 "num_base_bdevs_discovered": 1, 00:22:53.526 "num_base_bdevs_operational": 1, 00:22:53.526 "base_bdevs_list": [ 00:22:53.526 { 00:22:53.526 "name": null, 00:22:53.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:53.527 "is_configured": false, 00:22:53.527 "data_offset": 0, 00:22:53.527 "data_size": 7936 00:22:53.527 }, 00:22:53.527 { 00:22:53.527 "name": "BaseBdev2", 00:22:53.527 "uuid": "2fa20f7a-9422-5439-85c3-3a57c8bdaa43", 00:22:53.527 "is_configured": true, 00:22:53.527 "data_offset": 256, 00:22:53.527 "data_size": 7936 00:22:53.527 } 00:22:53.527 ] 00:22:53.527 }' 00:22:53.527 12:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:53.527 12:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:53.527 12:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:53.527 12:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:53.527 12:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:53.527 12:21:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.527 12:21:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:53.527 [2024-11-25 12:21:49.528284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:53.527 [2024-11-25 12:21:49.543724] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:22:53.527 12:21:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:22:53.527 12:21:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:22:53.527 [2024-11-25 12:21:49.546151] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:54.461 12:21:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:54.461 12:21:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:54.461 12:21:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:54.461 12:21:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:54.461 12:21:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:54.720 12:21:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:54.720 12:21:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.720 12:21:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:54.720 12:21:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.720 12:21:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.720 12:21:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:54.720 "name": "raid_bdev1", 00:22:54.720 "uuid": "c687ac72-c7c6-4598-becc-46ea31bf4a00", 00:22:54.720 "strip_size_kb": 0, 00:22:54.720 "state": "online", 00:22:54.720 "raid_level": "raid1", 00:22:54.720 "superblock": true, 00:22:54.720 "num_base_bdevs": 2, 00:22:54.720 "num_base_bdevs_discovered": 2, 00:22:54.720 "num_base_bdevs_operational": 2, 00:22:54.720 "process": { 00:22:54.720 "type": "rebuild", 00:22:54.720 "target": "spare", 00:22:54.720 "progress": { 00:22:54.720 "blocks": 2560, 00:22:54.720 
"percent": 32 00:22:54.720 } 00:22:54.720 }, 00:22:54.720 "base_bdevs_list": [ 00:22:54.720 { 00:22:54.720 "name": "spare", 00:22:54.720 "uuid": "b28cffba-3467-517d-80fe-eb02f196eb88", 00:22:54.720 "is_configured": true, 00:22:54.720 "data_offset": 256, 00:22:54.720 "data_size": 7936 00:22:54.720 }, 00:22:54.720 { 00:22:54.720 "name": "BaseBdev2", 00:22:54.720 "uuid": "2fa20f7a-9422-5439-85c3-3a57c8bdaa43", 00:22:54.720 "is_configured": true, 00:22:54.720 "data_offset": 256, 00:22:54.720 "data_size": 7936 00:22:54.720 } 00:22:54.720 ] 00:22:54.720 }' 00:22:54.720 12:21:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:54.720 12:21:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:54.720 12:21:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:54.720 12:21:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:54.720 12:21:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:22:54.720 12:21:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:22:54.720 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:22:54.720 12:21:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:22:54.720 12:21:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:22:54.720 12:21:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:22:54.720 12:21:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=729 00:22:54.720 12:21:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:54.720 12:21:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:22:54.720 12:21:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:54.720 12:21:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:54.720 12:21:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:54.720 12:21:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:54.720 12:21:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:54.720 12:21:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.720 12:21:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.720 12:21:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:54.720 12:21:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.720 12:21:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:54.720 "name": "raid_bdev1", 00:22:54.720 "uuid": "c687ac72-c7c6-4598-becc-46ea31bf4a00", 00:22:54.720 "strip_size_kb": 0, 00:22:54.720 "state": "online", 00:22:54.720 "raid_level": "raid1", 00:22:54.720 "superblock": true, 00:22:54.720 "num_base_bdevs": 2, 00:22:54.720 "num_base_bdevs_discovered": 2, 00:22:54.720 "num_base_bdevs_operational": 2, 00:22:54.720 "process": { 00:22:54.720 "type": "rebuild", 00:22:54.720 "target": "spare", 00:22:54.720 "progress": { 00:22:54.720 "blocks": 2816, 00:22:54.720 "percent": 35 00:22:54.720 } 00:22:54.720 }, 00:22:54.720 "base_bdevs_list": [ 00:22:54.720 { 00:22:54.720 "name": "spare", 00:22:54.720 "uuid": "b28cffba-3467-517d-80fe-eb02f196eb88", 00:22:54.720 "is_configured": true, 00:22:54.720 "data_offset": 256, 00:22:54.720 "data_size": 7936 00:22:54.720 }, 00:22:54.720 { 00:22:54.720 "name": "BaseBdev2", 
00:22:54.720 "uuid": "2fa20f7a-9422-5439-85c3-3a57c8bdaa43", 00:22:54.720 "is_configured": true, 00:22:54.720 "data_offset": 256, 00:22:54.720 "data_size": 7936 00:22:54.720 } 00:22:54.720 ] 00:22:54.720 }' 00:22:54.720 12:21:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:54.979 12:21:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:54.980 12:21:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:54.980 12:21:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:54.980 12:21:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:55.915 12:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:55.915 12:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:55.915 12:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:55.915 12:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:55.915 12:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:55.915 12:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:55.915 12:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:55.915 12:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:55.915 12:21:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.915 12:21:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:55.915 12:21:51 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.915 12:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:55.915 "name": "raid_bdev1", 00:22:55.915 "uuid": "c687ac72-c7c6-4598-becc-46ea31bf4a00", 00:22:55.915 "strip_size_kb": 0, 00:22:55.915 "state": "online", 00:22:55.915 "raid_level": "raid1", 00:22:55.915 "superblock": true, 00:22:55.915 "num_base_bdevs": 2, 00:22:55.915 "num_base_bdevs_discovered": 2, 00:22:55.915 "num_base_bdevs_operational": 2, 00:22:55.915 "process": { 00:22:55.915 "type": "rebuild", 00:22:55.915 "target": "spare", 00:22:55.915 "progress": { 00:22:55.915 "blocks": 5888, 00:22:55.915 "percent": 74 00:22:55.915 } 00:22:55.915 }, 00:22:55.915 "base_bdevs_list": [ 00:22:55.915 { 00:22:55.915 "name": "spare", 00:22:55.915 "uuid": "b28cffba-3467-517d-80fe-eb02f196eb88", 00:22:55.915 "is_configured": true, 00:22:55.915 "data_offset": 256, 00:22:55.915 "data_size": 7936 00:22:55.915 }, 00:22:55.915 { 00:22:55.915 "name": "BaseBdev2", 00:22:55.915 "uuid": "2fa20f7a-9422-5439-85c3-3a57c8bdaa43", 00:22:55.915 "is_configured": true, 00:22:55.915 "data_offset": 256, 00:22:55.915 "data_size": 7936 00:22:55.915 } 00:22:55.915 ] 00:22:55.915 }' 00:22:55.915 12:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:55.915 12:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:55.915 12:21:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:56.174 12:21:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:56.174 12:21:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:56.740 [2024-11-25 12:21:52.668548] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:56.740 [2024-11-25 12:21:52.668654] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:56.740 [2024-11-25 12:21:52.668795] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:56.998 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:56.998 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:56.998 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:56.998 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:56.998 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:56.998 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:56.998 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:56.998 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.998 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:56.998 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:56.998 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.257 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:57.257 "name": "raid_bdev1", 00:22:57.257 "uuid": "c687ac72-c7c6-4598-becc-46ea31bf4a00", 00:22:57.257 "strip_size_kb": 0, 00:22:57.257 "state": "online", 00:22:57.257 "raid_level": "raid1", 00:22:57.257 "superblock": true, 00:22:57.257 "num_base_bdevs": 2, 00:22:57.257 "num_base_bdevs_discovered": 2, 00:22:57.257 "num_base_bdevs_operational": 2, 00:22:57.257 "base_bdevs_list": [ 00:22:57.257 { 00:22:57.257 "name": 
"spare", 00:22:57.257 "uuid": "b28cffba-3467-517d-80fe-eb02f196eb88", 00:22:57.257 "is_configured": true, 00:22:57.257 "data_offset": 256, 00:22:57.257 "data_size": 7936 00:22:57.257 }, 00:22:57.257 { 00:22:57.257 "name": "BaseBdev2", 00:22:57.257 "uuid": "2fa20f7a-9422-5439-85c3-3a57c8bdaa43", 00:22:57.257 "is_configured": true, 00:22:57.257 "data_offset": 256, 00:22:57.257 "data_size": 7936 00:22:57.257 } 00:22:57.257 ] 00:22:57.257 }' 00:22:57.257 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:57.257 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:57.257 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:57.257 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:22:57.257 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:22:57.257 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:57.257 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:57.257 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:57.257 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:57.257 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:57.257 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:57.257 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.257 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:57.257 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:22:57.257 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.257 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:57.257 "name": "raid_bdev1", 00:22:57.257 "uuid": "c687ac72-c7c6-4598-becc-46ea31bf4a00", 00:22:57.257 "strip_size_kb": 0, 00:22:57.257 "state": "online", 00:22:57.257 "raid_level": "raid1", 00:22:57.257 "superblock": true, 00:22:57.257 "num_base_bdevs": 2, 00:22:57.257 "num_base_bdevs_discovered": 2, 00:22:57.257 "num_base_bdevs_operational": 2, 00:22:57.257 "base_bdevs_list": [ 00:22:57.257 { 00:22:57.257 "name": "spare", 00:22:57.257 "uuid": "b28cffba-3467-517d-80fe-eb02f196eb88", 00:22:57.257 "is_configured": true, 00:22:57.257 "data_offset": 256, 00:22:57.257 "data_size": 7936 00:22:57.257 }, 00:22:57.257 { 00:22:57.257 "name": "BaseBdev2", 00:22:57.257 "uuid": "2fa20f7a-9422-5439-85c3-3a57c8bdaa43", 00:22:57.257 "is_configured": true, 00:22:57.257 "data_offset": 256, 00:22:57.257 "data_size": 7936 00:22:57.257 } 00:22:57.257 ] 00:22:57.257 }' 00:22:57.257 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:57.257 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:57.257 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:57.516 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:57.516 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:57.516 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:57.516 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:57.516 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:57.516 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:57.516 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:57.516 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:57.516 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:57.516 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:57.516 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:57.516 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:57.516 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.516 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:57.516 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.516 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.516 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:57.516 "name": "raid_bdev1", 00:22:57.516 "uuid": "c687ac72-c7c6-4598-becc-46ea31bf4a00", 00:22:57.516 "strip_size_kb": 0, 00:22:57.516 "state": "online", 00:22:57.516 "raid_level": "raid1", 00:22:57.516 "superblock": true, 00:22:57.516 "num_base_bdevs": 2, 00:22:57.516 "num_base_bdevs_discovered": 2, 00:22:57.516 "num_base_bdevs_operational": 2, 00:22:57.516 "base_bdevs_list": [ 00:22:57.516 { 00:22:57.516 "name": "spare", 00:22:57.516 "uuid": "b28cffba-3467-517d-80fe-eb02f196eb88", 00:22:57.516 "is_configured": true, 00:22:57.516 "data_offset": 256, 00:22:57.516 "data_size": 7936 00:22:57.516 }, 00:22:57.516 { 
00:22:57.516 "name": "BaseBdev2", 00:22:57.516 "uuid": "2fa20f7a-9422-5439-85c3-3a57c8bdaa43", 00:22:57.516 "is_configured": true, 00:22:57.516 "data_offset": 256, 00:22:57.516 "data_size": 7936 00:22:57.516 } 00:22:57.516 ] 00:22:57.516 }' 00:22:57.516 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:57.516 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:58.083 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:58.083 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.083 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:58.083 [2024-11-25 12:21:53.872284] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:58.083 [2024-11-25 12:21:53.872326] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:58.083 [2024-11-25 12:21:53.872442] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:58.083 [2024-11-25 12:21:53.872533] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:58.083 [2024-11-25 12:21:53.872561] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:58.083 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.083 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:22:58.083 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:58.083 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.083 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:58.083 12:21:53 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.083 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:22:58.083 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:22:58.083 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:22:58.083 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:58.083 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:58.083 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:58.083 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:58.083 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:58.083 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:58.083 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:22:58.083 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:58.083 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:58.083 12:21:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:58.342 /dev/nbd0 00:22:58.342 12:21:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:58.342 12:21:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:58.342 12:21:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:58.342 12:21:54 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@873 -- # local i 00:22:58.342 12:21:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:58.342 12:21:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:58.342 12:21:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:58.342 12:21:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:22:58.342 12:21:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:58.342 12:21:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:58.342 12:21:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:58.342 1+0 records in 00:22:58.342 1+0 records out 00:22:58.342 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348833 s, 11.7 MB/s 00:22:58.342 12:21:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:58.342 12:21:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:22:58.342 12:21:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:58.342 12:21:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:58.342 12:21:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:22:58.342 12:21:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:58.342 12:21:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:58.342 12:21:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_start_disk spare /dev/nbd1 00:22:58.600 /dev/nbd1 00:22:58.600 12:21:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:58.600 12:21:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:58.600 12:21:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:22:58.600 12:21:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:22:58.600 12:21:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:58.600 12:21:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:58.600 12:21:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:22:58.600 12:21:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:22:58.600 12:21:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:58.600 12:21:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:58.600 12:21:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:58.600 1+0 records in 00:22:58.600 1+0 records out 00:22:58.600 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340728 s, 12.0 MB/s 00:22:58.600 12:21:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:58.600 12:21:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:22:58.600 12:21:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:58.600 12:21:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:58.600 12:21:54 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:22:58.600 12:21:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:58.600 12:21:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:58.600 12:21:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:58.858 12:21:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:22:58.858 12:21:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:58.858 12:21:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:58.858 12:21:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:58.858 12:21:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:22:58.858 12:21:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:58.858 12:21:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:59.117 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:59.117 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:59.117 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:59.117 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:59.117 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:59.117 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:59.117 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 
00:22:59.117 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:22:59.117 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:59.117 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:22:59.376 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:59.376 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:59.376 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:59.376 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:59.376 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:59.376 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:59.376 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:22:59.376 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:22:59.376 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:22:59.376 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:22:59.376 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.376 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:59.376 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.376 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:59.376 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 
00:22:59.376 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:59.376 [2024-11-25 12:21:55.394831] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:59.376 [2024-11-25 12:21:55.394897] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:59.376 [2024-11-25 12:21:55.394930] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:59.376 [2024-11-25 12:21:55.394946] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:59.376 [2024-11-25 12:21:55.397760] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:59.376 [2024-11-25 12:21:55.397806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:59.376 [2024-11-25 12:21:55.397924] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:59.376 [2024-11-25 12:21:55.397992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:59.376 [2024-11-25 12:21:55.398195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:59.376 spare 00:22:59.376 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.376 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:22:59.376 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.376 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:59.635 [2024-11-25 12:21:55.498347] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:22:59.635 [2024-11-25 12:21:55.498417] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:59.635 [2024-11-25 12:21:55.498843] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0001c1b50 00:22:59.635 [2024-11-25 12:21:55.499119] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:22:59.635 [2024-11-25 12:21:55.499147] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:22:59.635 [2024-11-25 12:21:55.499416] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:59.635 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.635 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:59.635 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:59.635 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:59.635 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:59.635 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:59.635 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:59.635 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:59.635 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:59.635 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:59.635 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:59.635 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:59.635 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.635 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:22:59.635 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:59.635 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.635 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:59.635 "name": "raid_bdev1", 00:22:59.635 "uuid": "c687ac72-c7c6-4598-becc-46ea31bf4a00", 00:22:59.635 "strip_size_kb": 0, 00:22:59.635 "state": "online", 00:22:59.635 "raid_level": "raid1", 00:22:59.635 "superblock": true, 00:22:59.635 "num_base_bdevs": 2, 00:22:59.635 "num_base_bdevs_discovered": 2, 00:22:59.635 "num_base_bdevs_operational": 2, 00:22:59.635 "base_bdevs_list": [ 00:22:59.635 { 00:22:59.635 "name": "spare", 00:22:59.635 "uuid": "b28cffba-3467-517d-80fe-eb02f196eb88", 00:22:59.635 "is_configured": true, 00:22:59.635 "data_offset": 256, 00:22:59.635 "data_size": 7936 00:22:59.635 }, 00:22:59.635 { 00:22:59.635 "name": "BaseBdev2", 00:22:59.635 "uuid": "2fa20f7a-9422-5439-85c3-3a57c8bdaa43", 00:22:59.635 "is_configured": true, 00:22:59.635 "data_offset": 256, 00:22:59.635 "data_size": 7936 00:22:59.635 } 00:22:59.635 ] 00:22:59.635 }' 00:22:59.635 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:59.635 12:21:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:00.202 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:00.202 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:00.202 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:00.202 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:00.202 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:00.202 
12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:00.202 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.202 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:00.202 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:00.202 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.202 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:00.202 "name": "raid_bdev1", 00:23:00.202 "uuid": "c687ac72-c7c6-4598-becc-46ea31bf4a00", 00:23:00.202 "strip_size_kb": 0, 00:23:00.202 "state": "online", 00:23:00.202 "raid_level": "raid1", 00:23:00.202 "superblock": true, 00:23:00.202 "num_base_bdevs": 2, 00:23:00.202 "num_base_bdevs_discovered": 2, 00:23:00.202 "num_base_bdevs_operational": 2, 00:23:00.202 "base_bdevs_list": [ 00:23:00.202 { 00:23:00.202 "name": "spare", 00:23:00.202 "uuid": "b28cffba-3467-517d-80fe-eb02f196eb88", 00:23:00.202 "is_configured": true, 00:23:00.202 "data_offset": 256, 00:23:00.202 "data_size": 7936 00:23:00.202 }, 00:23:00.202 { 00:23:00.202 "name": "BaseBdev2", 00:23:00.202 "uuid": "2fa20f7a-9422-5439-85c3-3a57c8bdaa43", 00:23:00.202 "is_configured": true, 00:23:00.202 "data_offset": 256, 00:23:00.202 "data_size": 7936 00:23:00.202 } 00:23:00.202 ] 00:23:00.202 }' 00:23:00.202 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:00.202 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:00.202 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:00.202 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:00.202 12:21:56 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:00.202 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.202 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:00.202 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:00.202 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.202 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:23:00.202 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:00.202 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.202 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:00.202 [2024-11-25 12:21:56.215549] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:00.202 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.202 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:00.202 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:00.202 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:00.202 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:00.202 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:00.202 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:00.202 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:23:00.202 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:00.202 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:00.202 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:00.202 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:00.202 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:00.202 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.202 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:00.202 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.202 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:00.202 "name": "raid_bdev1", 00:23:00.202 "uuid": "c687ac72-c7c6-4598-becc-46ea31bf4a00", 00:23:00.202 "strip_size_kb": 0, 00:23:00.202 "state": "online", 00:23:00.202 "raid_level": "raid1", 00:23:00.202 "superblock": true, 00:23:00.202 "num_base_bdevs": 2, 00:23:00.202 "num_base_bdevs_discovered": 1, 00:23:00.202 "num_base_bdevs_operational": 1, 00:23:00.202 "base_bdevs_list": [ 00:23:00.202 { 00:23:00.202 "name": null, 00:23:00.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.202 "is_configured": false, 00:23:00.202 "data_offset": 0, 00:23:00.202 "data_size": 7936 00:23:00.202 }, 00:23:00.202 { 00:23:00.202 "name": "BaseBdev2", 00:23:00.202 "uuid": "2fa20f7a-9422-5439-85c3-3a57c8bdaa43", 00:23:00.202 "is_configured": true, 00:23:00.202 "data_offset": 256, 00:23:00.202 "data_size": 7936 00:23:00.202 } 00:23:00.202 ] 00:23:00.202 }' 00:23:00.202 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:00.202 12:21:56 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:00.768 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:00.768 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.768 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:00.768 [2024-11-25 12:21:56.752089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:00.768 [2024-11-25 12:21:56.752356] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:00.768 [2024-11-25 12:21:56.752396] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:23:00.768 [2024-11-25 12:21:56.752440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:00.768 [2024-11-25 12:21:56.767779] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:23:00.768 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.768 12:21:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:23:00.768 [2024-11-25 12:21:56.770242] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:01.702 12:21:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:01.703 12:21:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:01.703 12:21:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:01.703 12:21:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:01.703 12:21:57 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:01.703 12:21:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:01.703 12:21:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.703 12:21:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:01.703 12:21:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:01.703 12:21:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.961 12:21:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:01.961 "name": "raid_bdev1", 00:23:01.961 "uuid": "c687ac72-c7c6-4598-becc-46ea31bf4a00", 00:23:01.961 "strip_size_kb": 0, 00:23:01.961 "state": "online", 00:23:01.961 "raid_level": "raid1", 00:23:01.961 "superblock": true, 00:23:01.961 "num_base_bdevs": 2, 00:23:01.961 "num_base_bdevs_discovered": 2, 00:23:01.961 "num_base_bdevs_operational": 2, 00:23:01.961 "process": { 00:23:01.961 "type": "rebuild", 00:23:01.961 "target": "spare", 00:23:01.961 "progress": { 00:23:01.961 "blocks": 2560, 00:23:01.961 "percent": 32 00:23:01.961 } 00:23:01.961 }, 00:23:01.962 "base_bdevs_list": [ 00:23:01.962 { 00:23:01.962 "name": "spare", 00:23:01.962 "uuid": "b28cffba-3467-517d-80fe-eb02f196eb88", 00:23:01.962 "is_configured": true, 00:23:01.962 "data_offset": 256, 00:23:01.962 "data_size": 7936 00:23:01.962 }, 00:23:01.962 { 00:23:01.962 "name": "BaseBdev2", 00:23:01.962 "uuid": "2fa20f7a-9422-5439-85c3-3a57c8bdaa43", 00:23:01.962 "is_configured": true, 00:23:01.962 "data_offset": 256, 00:23:01.962 "data_size": 7936 00:23:01.962 } 00:23:01.962 ] 00:23:01.962 }' 00:23:01.962 12:21:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:01.962 12:21:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:23:01.962 12:21:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:01.962 12:21:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:01.962 12:21:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:23:01.962 12:21:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.962 12:21:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:01.962 [2024-11-25 12:21:57.944657] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:01.962 [2024-11-25 12:21:57.979238] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:01.962 [2024-11-25 12:21:57.979373] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:01.962 [2024-11-25 12:21:57.979399] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:01.962 [2024-11-25 12:21:57.979426] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:01.962 12:21:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.962 12:21:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:01.962 12:21:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:01.962 12:21:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:01.962 12:21:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:01.962 12:21:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:01.962 12:21:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:23:01.962 12:21:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:01.962 12:21:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:01.962 12:21:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:01.962 12:21:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:01.962 12:21:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:01.962 12:21:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:01.962 12:21:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.962 12:21:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:01.962 12:21:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.220 12:21:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:02.220 "name": "raid_bdev1", 00:23:02.220 "uuid": "c687ac72-c7c6-4598-becc-46ea31bf4a00", 00:23:02.220 "strip_size_kb": 0, 00:23:02.220 "state": "online", 00:23:02.220 "raid_level": "raid1", 00:23:02.220 "superblock": true, 00:23:02.220 "num_base_bdevs": 2, 00:23:02.220 "num_base_bdevs_discovered": 1, 00:23:02.220 "num_base_bdevs_operational": 1, 00:23:02.220 "base_bdevs_list": [ 00:23:02.220 { 00:23:02.220 "name": null, 00:23:02.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:02.220 "is_configured": false, 00:23:02.220 "data_offset": 0, 00:23:02.220 "data_size": 7936 00:23:02.220 }, 00:23:02.220 { 00:23:02.220 "name": "BaseBdev2", 00:23:02.220 "uuid": "2fa20f7a-9422-5439-85c3-3a57c8bdaa43", 00:23:02.220 "is_configured": true, 00:23:02.220 "data_offset": 256, 00:23:02.220 "data_size": 7936 00:23:02.220 } 00:23:02.220 ] 00:23:02.220 }' 
00:23:02.220 12:21:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:02.220 12:21:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:02.786 12:21:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:02.786 12:21:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.786 12:21:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:02.786 [2024-11-25 12:21:58.576002] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:02.786 [2024-11-25 12:21:58.576091] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:02.786 [2024-11-25 12:21:58.576130] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:23:02.786 [2024-11-25 12:21:58.576167] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:02.786 [2024-11-25 12:21:58.576800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:02.786 [2024-11-25 12:21:58.576843] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:02.786 [2024-11-25 12:21:58.576963] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:02.786 [2024-11-25 12:21:58.576987] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:02.786 [2024-11-25 12:21:58.577001] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:02.786 [2024-11-25 12:21:58.577048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:02.786 [2024-11-25 12:21:58.593147] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:23:02.786 spare 00:23:02.786 12:21:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.786 12:21:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:23:02.786 [2024-11-25 12:21:58.595714] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:03.721 12:21:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:03.721 12:21:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:03.721 12:21:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:03.721 12:21:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:03.721 12:21:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:03.721 12:21:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:03.721 12:21:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.721 12:21:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:03.721 12:21:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:03.721 12:21:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.721 12:21:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:03.721 "name": "raid_bdev1", 00:23:03.721 "uuid": "c687ac72-c7c6-4598-becc-46ea31bf4a00", 00:23:03.721 "strip_size_kb": 0, 00:23:03.721 
"state": "online", 00:23:03.721 "raid_level": "raid1", 00:23:03.721 "superblock": true, 00:23:03.721 "num_base_bdevs": 2, 00:23:03.721 "num_base_bdevs_discovered": 2, 00:23:03.721 "num_base_bdevs_operational": 2, 00:23:03.721 "process": { 00:23:03.721 "type": "rebuild", 00:23:03.721 "target": "spare", 00:23:03.721 "progress": { 00:23:03.721 "blocks": 2560, 00:23:03.721 "percent": 32 00:23:03.721 } 00:23:03.721 }, 00:23:03.721 "base_bdevs_list": [ 00:23:03.721 { 00:23:03.721 "name": "spare", 00:23:03.721 "uuid": "b28cffba-3467-517d-80fe-eb02f196eb88", 00:23:03.721 "is_configured": true, 00:23:03.721 "data_offset": 256, 00:23:03.721 "data_size": 7936 00:23:03.721 }, 00:23:03.721 { 00:23:03.721 "name": "BaseBdev2", 00:23:03.721 "uuid": "2fa20f7a-9422-5439-85c3-3a57c8bdaa43", 00:23:03.721 "is_configured": true, 00:23:03.721 "data_offset": 256, 00:23:03.721 "data_size": 7936 00:23:03.721 } 00:23:03.721 ] 00:23:03.721 }' 00:23:03.721 12:21:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:03.721 12:21:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:03.721 12:21:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:03.721 12:21:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:03.721 12:21:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:23:03.721 12:21:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.721 12:21:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:03.721 [2024-11-25 12:21:59.768710] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:03.721 [2024-11-25 12:21:59.804356] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:23:03.721 [2024-11-25 12:21:59.804428] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:03.721 [2024-11-25 12:21:59.804455] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:03.721 [2024-11-25 12:21:59.804467] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:03.980 12:21:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.980 12:21:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:03.980 12:21:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:03.980 12:21:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:03.980 12:21:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:03.980 12:21:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:03.980 12:21:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:03.980 12:21:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:03.980 12:21:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:03.980 12:21:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:03.980 12:21:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:03.980 12:21:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:03.980 12:21:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:03.980 12:21:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.980 12:21:59 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:03.980 12:21:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.980 12:21:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:03.980 "name": "raid_bdev1", 00:23:03.980 "uuid": "c687ac72-c7c6-4598-becc-46ea31bf4a00", 00:23:03.980 "strip_size_kb": 0, 00:23:03.980 "state": "online", 00:23:03.980 "raid_level": "raid1", 00:23:03.980 "superblock": true, 00:23:03.980 "num_base_bdevs": 2, 00:23:03.980 "num_base_bdevs_discovered": 1, 00:23:03.980 "num_base_bdevs_operational": 1, 00:23:03.980 "base_bdevs_list": [ 00:23:03.980 { 00:23:03.980 "name": null, 00:23:03.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.980 "is_configured": false, 00:23:03.980 "data_offset": 0, 00:23:03.980 "data_size": 7936 00:23:03.980 }, 00:23:03.980 { 00:23:03.980 "name": "BaseBdev2", 00:23:03.980 "uuid": "2fa20f7a-9422-5439-85c3-3a57c8bdaa43", 00:23:03.980 "is_configured": true, 00:23:03.980 "data_offset": 256, 00:23:03.980 "data_size": 7936 00:23:03.980 } 00:23:03.980 ] 00:23:03.980 }' 00:23:03.980 12:21:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:03.980 12:21:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:04.548 12:22:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:04.548 12:22:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:04.548 12:22:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:04.548 12:22:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:04.548 12:22:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:04.548 12:22:00 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:04.548 12:22:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.548 12:22:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:04.548 12:22:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:04.548 12:22:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.548 12:22:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:04.548 "name": "raid_bdev1", 00:23:04.548 "uuid": "c687ac72-c7c6-4598-becc-46ea31bf4a00", 00:23:04.548 "strip_size_kb": 0, 00:23:04.548 "state": "online", 00:23:04.548 "raid_level": "raid1", 00:23:04.548 "superblock": true, 00:23:04.548 "num_base_bdevs": 2, 00:23:04.548 "num_base_bdevs_discovered": 1, 00:23:04.548 "num_base_bdevs_operational": 1, 00:23:04.548 "base_bdevs_list": [ 00:23:04.548 { 00:23:04.548 "name": null, 00:23:04.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.548 "is_configured": false, 00:23:04.548 "data_offset": 0, 00:23:04.548 "data_size": 7936 00:23:04.549 }, 00:23:04.549 { 00:23:04.549 "name": "BaseBdev2", 00:23:04.549 "uuid": "2fa20f7a-9422-5439-85c3-3a57c8bdaa43", 00:23:04.549 "is_configured": true, 00:23:04.549 "data_offset": 256, 00:23:04.549 "data_size": 7936 00:23:04.549 } 00:23:04.549 ] 00:23:04.549 }' 00:23:04.549 12:22:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:04.549 12:22:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:04.549 12:22:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:04.549 12:22:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:04.549 12:22:00 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:23:04.549 12:22:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.549 12:22:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:04.549 12:22:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.549 12:22:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:04.549 12:22:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.549 12:22:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:04.549 [2024-11-25 12:22:00.531739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:04.549 [2024-11-25 12:22:00.531806] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:04.549 [2024-11-25 12:22:00.531839] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:23:04.549 [2024-11-25 12:22:00.531866] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:04.549 [2024-11-25 12:22:00.532436] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:04.549 [2024-11-25 12:22:00.532473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:04.549 [2024-11-25 12:22:00.532574] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:04.549 [2024-11-25 12:22:00.532597] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:04.549 [2024-11-25 12:22:00.532613] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:04.549 [2024-11-25 12:22:00.532627] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:23:04.549 BaseBdev1 00:23:04.549 12:22:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.549 12:22:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:23:05.484 12:22:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:05.485 12:22:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:05.485 12:22:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:05.485 12:22:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:05.485 12:22:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:05.485 12:22:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:05.485 12:22:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:05.485 12:22:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:05.485 12:22:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:05.485 12:22:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:05.485 12:22:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:05.485 12:22:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:05.485 12:22:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.485 12:22:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:05.485 12:22:01 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.743 12:22:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:05.743 "name": "raid_bdev1", 00:23:05.743 "uuid": "c687ac72-c7c6-4598-becc-46ea31bf4a00", 00:23:05.743 "strip_size_kb": 0, 00:23:05.743 "state": "online", 00:23:05.743 "raid_level": "raid1", 00:23:05.743 "superblock": true, 00:23:05.743 "num_base_bdevs": 2, 00:23:05.743 "num_base_bdevs_discovered": 1, 00:23:05.743 "num_base_bdevs_operational": 1, 00:23:05.743 "base_bdevs_list": [ 00:23:05.743 { 00:23:05.743 "name": null, 00:23:05.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:05.743 "is_configured": false, 00:23:05.743 "data_offset": 0, 00:23:05.743 "data_size": 7936 00:23:05.743 }, 00:23:05.743 { 00:23:05.743 "name": "BaseBdev2", 00:23:05.743 "uuid": "2fa20f7a-9422-5439-85c3-3a57c8bdaa43", 00:23:05.743 "is_configured": true, 00:23:05.743 "data_offset": 256, 00:23:05.743 "data_size": 7936 00:23:05.743 } 00:23:05.743 ] 00:23:05.743 }' 00:23:05.743 12:22:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:05.743 12:22:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:06.002 12:22:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:06.002 12:22:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:06.002 12:22:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:06.002 12:22:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:06.002 12:22:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:06.002 12:22:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:06.002 12:22:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:23:06.002 12:22:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.002 12:22:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:06.002 12:22:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.002 12:22:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:06.002 "name": "raid_bdev1", 00:23:06.002 "uuid": "c687ac72-c7c6-4598-becc-46ea31bf4a00", 00:23:06.002 "strip_size_kb": 0, 00:23:06.002 "state": "online", 00:23:06.002 "raid_level": "raid1", 00:23:06.002 "superblock": true, 00:23:06.002 "num_base_bdevs": 2, 00:23:06.002 "num_base_bdevs_discovered": 1, 00:23:06.002 "num_base_bdevs_operational": 1, 00:23:06.002 "base_bdevs_list": [ 00:23:06.002 { 00:23:06.002 "name": null, 00:23:06.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:06.002 "is_configured": false, 00:23:06.002 "data_offset": 0, 00:23:06.002 "data_size": 7936 00:23:06.002 }, 00:23:06.002 { 00:23:06.002 "name": "BaseBdev2", 00:23:06.002 "uuid": "2fa20f7a-9422-5439-85c3-3a57c8bdaa43", 00:23:06.002 "is_configured": true, 00:23:06.002 "data_offset": 256, 00:23:06.002 "data_size": 7936 00:23:06.002 } 00:23:06.002 ] 00:23:06.002 }' 00:23:06.262 12:22:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:06.262 12:22:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:06.262 12:22:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:06.262 12:22:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:06.262 12:22:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:06.262 12:22:02 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@652 -- # local es=0 00:23:06.262 12:22:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:06.262 12:22:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:06.262 12:22:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:06.262 12:22:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:06.262 12:22:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:06.262 12:22:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:06.262 12:22:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.262 12:22:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:06.262 [2024-11-25 12:22:02.184467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:06.262 [2024-11-25 12:22:02.184677] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:06.262 [2024-11-25 12:22:02.184713] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:06.262 request: 00:23:06.262 { 00:23:06.262 "base_bdev": "BaseBdev1", 00:23:06.262 "raid_bdev": "raid_bdev1", 00:23:06.262 "method": "bdev_raid_add_base_bdev", 00:23:06.262 "req_id": 1 00:23:06.262 } 00:23:06.262 Got JSON-RPC error response 00:23:06.262 response: 00:23:06.262 { 00:23:06.262 "code": -22, 00:23:06.262 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:23:06.262 } 00:23:06.262 12:22:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:23:06.262 12:22:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:23:06.262 12:22:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:06.262 12:22:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:06.262 12:22:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:06.262 12:22:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:23:07.198 12:22:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:07.198 12:22:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:07.198 12:22:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:07.198 12:22:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:07.198 12:22:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:07.198 12:22:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:07.198 12:22:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:07.198 12:22:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:07.198 12:22:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:07.198 12:22:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:07.198 12:22:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:07.198 12:22:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.198 12:22:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:23:07.198 12:22:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:07.198 12:22:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.198 12:22:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:07.198 "name": "raid_bdev1", 00:23:07.198 "uuid": "c687ac72-c7c6-4598-becc-46ea31bf4a00", 00:23:07.198 "strip_size_kb": 0, 00:23:07.198 "state": "online", 00:23:07.198 "raid_level": "raid1", 00:23:07.198 "superblock": true, 00:23:07.198 "num_base_bdevs": 2, 00:23:07.198 "num_base_bdevs_discovered": 1, 00:23:07.198 "num_base_bdevs_operational": 1, 00:23:07.198 "base_bdevs_list": [ 00:23:07.198 { 00:23:07.198 "name": null, 00:23:07.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.198 "is_configured": false, 00:23:07.198 "data_offset": 0, 00:23:07.198 "data_size": 7936 00:23:07.198 }, 00:23:07.198 { 00:23:07.198 "name": "BaseBdev2", 00:23:07.198 "uuid": "2fa20f7a-9422-5439-85c3-3a57c8bdaa43", 00:23:07.198 "is_configured": true, 00:23:07.198 "data_offset": 256, 00:23:07.198 "data_size": 7936 00:23:07.198 } 00:23:07.198 ] 00:23:07.198 }' 00:23:07.198 12:22:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:07.198 12:22:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:07.767 12:22:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:07.767 12:22:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:07.767 12:22:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:07.767 12:22:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:07.767 12:22:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:07.767 12:22:03 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:07.767 12:22:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.767 12:22:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:07.767 12:22:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:07.767 12:22:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.767 12:22:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:07.767 "name": "raid_bdev1", 00:23:07.767 "uuid": "c687ac72-c7c6-4598-becc-46ea31bf4a00", 00:23:07.767 "strip_size_kb": 0, 00:23:07.767 "state": "online", 00:23:07.767 "raid_level": "raid1", 00:23:07.767 "superblock": true, 00:23:07.767 "num_base_bdevs": 2, 00:23:07.767 "num_base_bdevs_discovered": 1, 00:23:07.767 "num_base_bdevs_operational": 1, 00:23:07.767 "base_bdevs_list": [ 00:23:07.767 { 00:23:07.767 "name": null, 00:23:07.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.767 "is_configured": false, 00:23:07.767 "data_offset": 0, 00:23:07.767 "data_size": 7936 00:23:07.767 }, 00:23:07.767 { 00:23:07.767 "name": "BaseBdev2", 00:23:07.767 "uuid": "2fa20f7a-9422-5439-85c3-3a57c8bdaa43", 00:23:07.767 "is_configured": true, 00:23:07.767 "data_offset": 256, 00:23:07.767 "data_size": 7936 00:23:07.767 } 00:23:07.767 ] 00:23:07.767 }' 00:23:07.767 12:22:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:07.767 12:22:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:07.767 12:22:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:08.027 12:22:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:08.027 12:22:03 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86920 00:23:08.027 12:22:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86920 ']' 00:23:08.027 12:22:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86920 00:23:08.027 12:22:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:23:08.027 12:22:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:08.027 12:22:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86920 00:23:08.027 12:22:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:08.027 12:22:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:08.027 killing process with pid 86920 00:23:08.027 12:22:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86920' 00:23:08.027 Received shutdown signal, test time was about 60.000000 seconds 00:23:08.027 00:23:08.027 Latency(us) 00:23:08.027 [2024-11-25T12:22:04.118Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.027 [2024-11-25T12:22:04.118Z] =================================================================================================================== 00:23:08.027 [2024-11-25T12:22:04.118Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:08.027 12:22:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86920 00:23:08.027 [2024-11-25 12:22:03.930488] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:08.027 12:22:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86920 00:23:08.027 [2024-11-25 12:22:03.930661] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:08.028 [2024-11-25 
12:22:03.930730] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:08.028 [2024-11-25 12:22:03.930755] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:23:08.287 [2024-11-25 12:22:04.197387] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:09.224 12:22:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:23:09.224 00:23:09.224 real 0m21.642s 00:23:09.224 user 0m29.417s 00:23:09.224 sys 0m2.483s 00:23:09.224 12:22:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:09.224 12:22:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:09.224 ************************************ 00:23:09.224 END TEST raid_rebuild_test_sb_4k 00:23:09.224 ************************************ 00:23:09.224 12:22:05 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:23:09.224 12:22:05 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:23:09.224 12:22:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:09.224 12:22:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:09.224 12:22:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:09.224 ************************************ 00:23:09.224 START TEST raid_state_function_test_sb_md_separate 00:23:09.224 ************************************ 00:23:09.224 12:22:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:23:09.224 12:22:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:23:09.224 12:22:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:23:09.224 
12:22:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:23:09.224 12:22:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:23:09.224 12:22:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:23:09.224 12:22:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:09.224 12:22:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:23:09.224 12:22:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:09.224 12:22:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:09.224 12:22:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:23:09.224 12:22:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:09.224 12:22:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:09.224 12:22:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:09.224 12:22:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:23:09.224 12:22:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:23:09.224 12:22:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:23:09.224 12:22:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:23:09.224 12:22:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:23:09.224 12:22:05 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:23:09.224 12:22:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:23:09.224 12:22:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:23:09.224 12:22:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:23:09.224 12:22:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87625 00:23:09.224 Process raid pid: 87625 00:23:09.224 12:22:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87625' 00:23:09.224 12:22:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:09.224 12:22:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87625 00:23:09.224 12:22:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87625 ']' 00:23:09.224 12:22:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.224 12:22:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:09.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:09.224 12:22:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:09.224 12:22:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:09.224 12:22:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:09.482 [2024-11-25 12:22:05.378758] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:23:09.483 [2024-11-25 12:22:05.378949] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.741 [2024-11-25 12:22:05.578262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.742 [2024-11-25 12:22:05.734624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.000 [2024-11-25 12:22:05.955462] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:10.000 [2024-11-25 12:22:05.955520] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:10.567 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:10.567 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:23:10.567 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:10.568 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.568 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:10.568 [2024-11-25 12:22:06.363505] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:10.568 [2024-11-25 12:22:06.363572] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:23:10.568 [2024-11-25 12:22:06.363590] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:10.568 [2024-11-25 12:22:06.363607] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:10.568 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.568 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:10.568 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:10.568 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:10.568 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:10.568 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:10.568 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:10.568 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:10.568 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:10.568 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:10.568 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:10.568 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:10.568 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:23:10.568 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.568 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:10.568 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.568 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:10.568 "name": "Existed_Raid", 00:23:10.568 "uuid": "c5f4ace3-8a44-48da-a3e2-af350d5cf610", 00:23:10.568 "strip_size_kb": 0, 00:23:10.568 "state": "configuring", 00:23:10.568 "raid_level": "raid1", 00:23:10.568 "superblock": true, 00:23:10.568 "num_base_bdevs": 2, 00:23:10.568 "num_base_bdevs_discovered": 0, 00:23:10.568 "num_base_bdevs_operational": 2, 00:23:10.568 "base_bdevs_list": [ 00:23:10.568 { 00:23:10.568 "name": "BaseBdev1", 00:23:10.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:10.568 "is_configured": false, 00:23:10.568 "data_offset": 0, 00:23:10.568 "data_size": 0 00:23:10.568 }, 00:23:10.568 { 00:23:10.568 "name": "BaseBdev2", 00:23:10.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:10.568 "is_configured": false, 00:23:10.568 "data_offset": 0, 00:23:10.568 "data_size": 0 00:23:10.568 } 00:23:10.568 ] 00:23:10.568 }' 00:23:10.568 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:10.568 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:10.826 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:10.826 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.826 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:10.826 
[2024-11-25 12:22:06.884050] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:10.826 [2024-11-25 12:22:06.884101] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:23:10.826 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.826 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:10.827 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.827 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:10.827 [2024-11-25 12:22:06.892024] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:10.827 [2024-11-25 12:22:06.892078] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:10.827 [2024-11-25 12:22:06.892094] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:10.827 [2024-11-25 12:22:06.892112] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:10.827 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.827 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:23:10.827 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.827 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:11.086 [2024-11-25 12:22:06.937853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:11.086 
BaseBdev1 00:23:11.086 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.086 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:23:11.086 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:23:11.086 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:11.086 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:23:11.086 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:11.086 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:11.086 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:11.086 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.086 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:11.086 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.086 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:11.086 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.086 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:11.086 [ 00:23:11.086 { 00:23:11.086 "name": "BaseBdev1", 00:23:11.086 "aliases": [ 00:23:11.086 "2320db10-b801-4ff6-a263-27d6540b2e62" 00:23:11.086 ], 00:23:11.086 "product_name": "Malloc disk", 
00:23:11.086 "block_size": 4096, 00:23:11.086 "num_blocks": 8192, 00:23:11.086 "uuid": "2320db10-b801-4ff6-a263-27d6540b2e62", 00:23:11.086 "md_size": 32, 00:23:11.086 "md_interleave": false, 00:23:11.086 "dif_type": 0, 00:23:11.086 "assigned_rate_limits": { 00:23:11.086 "rw_ios_per_sec": 0, 00:23:11.086 "rw_mbytes_per_sec": 0, 00:23:11.086 "r_mbytes_per_sec": 0, 00:23:11.086 "w_mbytes_per_sec": 0 00:23:11.086 }, 00:23:11.086 "claimed": true, 00:23:11.086 "claim_type": "exclusive_write", 00:23:11.086 "zoned": false, 00:23:11.086 "supported_io_types": { 00:23:11.086 "read": true, 00:23:11.086 "write": true, 00:23:11.086 "unmap": true, 00:23:11.086 "flush": true, 00:23:11.086 "reset": true, 00:23:11.086 "nvme_admin": false, 00:23:11.086 "nvme_io": false, 00:23:11.086 "nvme_io_md": false, 00:23:11.086 "write_zeroes": true, 00:23:11.086 "zcopy": true, 00:23:11.086 "get_zone_info": false, 00:23:11.086 "zone_management": false, 00:23:11.086 "zone_append": false, 00:23:11.086 "compare": false, 00:23:11.086 "compare_and_write": false, 00:23:11.086 "abort": true, 00:23:11.086 "seek_hole": false, 00:23:11.086 "seek_data": false, 00:23:11.086 "copy": true, 00:23:11.086 "nvme_iov_md": false 00:23:11.086 }, 00:23:11.086 "memory_domains": [ 00:23:11.086 { 00:23:11.086 "dma_device_id": "system", 00:23:11.086 "dma_device_type": 1 00:23:11.086 }, 00:23:11.086 { 00:23:11.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:11.087 "dma_device_type": 2 00:23:11.087 } 00:23:11.087 ], 00:23:11.087 "driver_specific": {} 00:23:11.087 } 00:23:11.087 ] 00:23:11.087 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.087 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:23:11.087 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:11.087 12:22:06 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:11.087 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:11.087 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:11.087 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:11.087 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:11.087 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:11.087 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:11.087 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:11.087 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:11.087 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:11.087 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:11.087 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.087 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:11.087 12:22:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.087 12:22:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:11.087 "name": "Existed_Raid", 00:23:11.087 "uuid": "7e4290af-2914-4ed7-92c8-f328b2739795", 
00:23:11.087 "strip_size_kb": 0, 00:23:11.087 "state": "configuring", 00:23:11.087 "raid_level": "raid1", 00:23:11.087 "superblock": true, 00:23:11.087 "num_base_bdevs": 2, 00:23:11.087 "num_base_bdevs_discovered": 1, 00:23:11.087 "num_base_bdevs_operational": 2, 00:23:11.087 "base_bdevs_list": [ 00:23:11.087 { 00:23:11.087 "name": "BaseBdev1", 00:23:11.087 "uuid": "2320db10-b801-4ff6-a263-27d6540b2e62", 00:23:11.087 "is_configured": true, 00:23:11.087 "data_offset": 256, 00:23:11.087 "data_size": 7936 00:23:11.087 }, 00:23:11.087 { 00:23:11.087 "name": "BaseBdev2", 00:23:11.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:11.087 "is_configured": false, 00:23:11.087 "data_offset": 0, 00:23:11.087 "data_size": 0 00:23:11.087 } 00:23:11.087 ] 00:23:11.087 }' 00:23:11.087 12:22:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:11.087 12:22:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:11.654 12:22:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:11.654 12:22:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.654 12:22:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:11.654 [2024-11-25 12:22:07.462076] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:11.654 [2024-11-25 12:22:07.462144] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:23:11.654 12:22:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.654 12:22:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:11.654 12:22:07 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.654 12:22:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:11.654 [2024-11-25 12:22:07.470100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:11.654 [2024-11-25 12:22:07.472543] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:11.654 [2024-11-25 12:22:07.472609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:11.654 12:22:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.654 12:22:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:23:11.654 12:22:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:11.654 12:22:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:11.654 12:22:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:11.654 12:22:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:11.654 12:22:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:11.654 12:22:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:11.654 12:22:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:11.654 12:22:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:11.654 12:22:07 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:11.654 12:22:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:11.654 12:22:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:11.654 12:22:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:11.654 12:22:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.654 12:22:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:11.654 12:22:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:11.654 12:22:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.654 12:22:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:11.654 "name": "Existed_Raid", 00:23:11.654 "uuid": "35731873-6830-4db6-a1b9-71861092fa94", 00:23:11.654 "strip_size_kb": 0, 00:23:11.654 "state": "configuring", 00:23:11.654 "raid_level": "raid1", 00:23:11.654 "superblock": true, 00:23:11.654 "num_base_bdevs": 2, 00:23:11.654 "num_base_bdevs_discovered": 1, 00:23:11.654 "num_base_bdevs_operational": 2, 00:23:11.654 "base_bdevs_list": [ 00:23:11.654 { 00:23:11.654 "name": "BaseBdev1", 00:23:11.654 "uuid": "2320db10-b801-4ff6-a263-27d6540b2e62", 00:23:11.654 "is_configured": true, 00:23:11.654 "data_offset": 256, 00:23:11.654 "data_size": 7936 00:23:11.654 }, 00:23:11.654 { 00:23:11.654 "name": "BaseBdev2", 00:23:11.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:11.654 "is_configured": false, 00:23:11.654 "data_offset": 0, 00:23:11.654 "data_size": 0 00:23:11.654 } 00:23:11.654 ] 00:23:11.654 }' 00:23:11.654 12:22:07 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:11.654 12:22:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:11.913 12:22:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:23:11.913 12:22:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.913 12:22:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:12.172 [2024-11-25 12:22:08.037781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:12.172 [2024-11-25 12:22:08.038077] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:12.172 [2024-11-25 12:22:08.038097] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:12.173 [2024-11-25 12:22:08.038195] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:12.173 [2024-11-25 12:22:08.038380] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:12.173 [2024-11-25 12:22:08.038401] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:23:12.173 BaseBdev2 00:23:12.173 [2024-11-25 12:22:08.038533] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:12.173 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.173 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:23:12.173 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:23:12.173 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:12.173 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:23:12.173 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:12.173 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:12.173 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:12.173 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.173 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:12.173 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.173 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:12.173 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.173 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:12.173 [ 00:23:12.173 { 00:23:12.173 "name": "BaseBdev2", 00:23:12.173 "aliases": [ 00:23:12.173 "a3ded354-d6d6-4aef-9783-8ee660419388" 00:23:12.173 ], 00:23:12.173 "product_name": "Malloc disk", 00:23:12.173 "block_size": 4096, 00:23:12.173 "num_blocks": 8192, 00:23:12.173 "uuid": "a3ded354-d6d6-4aef-9783-8ee660419388", 00:23:12.173 "md_size": 32, 00:23:12.173 "md_interleave": false, 00:23:12.173 "dif_type": 0, 00:23:12.173 "assigned_rate_limits": { 00:23:12.173 "rw_ios_per_sec": 0, 00:23:12.173 "rw_mbytes_per_sec": 0, 00:23:12.173 "r_mbytes_per_sec": 0, 00:23:12.173 "w_mbytes_per_sec": 0 00:23:12.173 }, 00:23:12.173 "claimed": true, 00:23:12.173 "claim_type": 
"exclusive_write", 00:23:12.173 "zoned": false, 00:23:12.173 "supported_io_types": { 00:23:12.173 "read": true, 00:23:12.173 "write": true, 00:23:12.173 "unmap": true, 00:23:12.173 "flush": true, 00:23:12.173 "reset": true, 00:23:12.173 "nvme_admin": false, 00:23:12.173 "nvme_io": false, 00:23:12.173 "nvme_io_md": false, 00:23:12.173 "write_zeroes": true, 00:23:12.173 "zcopy": true, 00:23:12.173 "get_zone_info": false, 00:23:12.173 "zone_management": false, 00:23:12.173 "zone_append": false, 00:23:12.173 "compare": false, 00:23:12.173 "compare_and_write": false, 00:23:12.173 "abort": true, 00:23:12.173 "seek_hole": false, 00:23:12.173 "seek_data": false, 00:23:12.173 "copy": true, 00:23:12.173 "nvme_iov_md": false 00:23:12.173 }, 00:23:12.173 "memory_domains": [ 00:23:12.173 { 00:23:12.173 "dma_device_id": "system", 00:23:12.173 "dma_device_type": 1 00:23:12.173 }, 00:23:12.173 { 00:23:12.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:12.173 "dma_device_type": 2 00:23:12.173 } 00:23:12.173 ], 00:23:12.173 "driver_specific": {} 00:23:12.173 } 00:23:12.173 ] 00:23:12.173 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.173 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:23:12.173 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:12.173 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:12.173 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:23:12.173 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:12.173 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:12.173 
12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:12.173 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:12.173 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:12.173 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:12.173 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:12.173 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:12.173 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:12.173 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:12.173 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:12.173 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.173 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:12.173 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.173 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:12.173 "name": "Existed_Raid", 00:23:12.173 "uuid": "35731873-6830-4db6-a1b9-71861092fa94", 00:23:12.173 "strip_size_kb": 0, 00:23:12.173 "state": "online", 00:23:12.173 "raid_level": "raid1", 00:23:12.173 "superblock": true, 00:23:12.173 "num_base_bdevs": 2, 00:23:12.173 "num_base_bdevs_discovered": 2, 00:23:12.173 "num_base_bdevs_operational": 2, 00:23:12.173 
"base_bdevs_list": [ 00:23:12.173 { 00:23:12.173 "name": "BaseBdev1", 00:23:12.173 "uuid": "2320db10-b801-4ff6-a263-27d6540b2e62", 00:23:12.173 "is_configured": true, 00:23:12.173 "data_offset": 256, 00:23:12.173 "data_size": 7936 00:23:12.173 }, 00:23:12.173 { 00:23:12.173 "name": "BaseBdev2", 00:23:12.173 "uuid": "a3ded354-d6d6-4aef-9783-8ee660419388", 00:23:12.173 "is_configured": true, 00:23:12.173 "data_offset": 256, 00:23:12.173 "data_size": 7936 00:23:12.173 } 00:23:12.173 ] 00:23:12.173 }' 00:23:12.173 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:12.173 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:12.741 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:23:12.741 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:12.741 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:12.741 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:12.741 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:23:12.741 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:12.741 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:12.741 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:12.741 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.741 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:23:12.741 [2024-11-25 12:22:08.594421] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:12.741 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.741 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:12.741 "name": "Existed_Raid", 00:23:12.741 "aliases": [ 00:23:12.741 "35731873-6830-4db6-a1b9-71861092fa94" 00:23:12.741 ], 00:23:12.741 "product_name": "Raid Volume", 00:23:12.741 "block_size": 4096, 00:23:12.741 "num_blocks": 7936, 00:23:12.741 "uuid": "35731873-6830-4db6-a1b9-71861092fa94", 00:23:12.741 "md_size": 32, 00:23:12.741 "md_interleave": false, 00:23:12.741 "dif_type": 0, 00:23:12.741 "assigned_rate_limits": { 00:23:12.741 "rw_ios_per_sec": 0, 00:23:12.741 "rw_mbytes_per_sec": 0, 00:23:12.741 "r_mbytes_per_sec": 0, 00:23:12.741 "w_mbytes_per_sec": 0 00:23:12.741 }, 00:23:12.741 "claimed": false, 00:23:12.741 "zoned": false, 00:23:12.741 "supported_io_types": { 00:23:12.741 "read": true, 00:23:12.741 "write": true, 00:23:12.741 "unmap": false, 00:23:12.741 "flush": false, 00:23:12.741 "reset": true, 00:23:12.741 "nvme_admin": false, 00:23:12.741 "nvme_io": false, 00:23:12.741 "nvme_io_md": false, 00:23:12.741 "write_zeroes": true, 00:23:12.741 "zcopy": false, 00:23:12.741 "get_zone_info": false, 00:23:12.741 "zone_management": false, 00:23:12.741 "zone_append": false, 00:23:12.741 "compare": false, 00:23:12.741 "compare_and_write": false, 00:23:12.741 "abort": false, 00:23:12.741 "seek_hole": false, 00:23:12.741 "seek_data": false, 00:23:12.741 "copy": false, 00:23:12.741 "nvme_iov_md": false 00:23:12.741 }, 00:23:12.741 "memory_domains": [ 00:23:12.741 { 00:23:12.741 "dma_device_id": "system", 00:23:12.741 "dma_device_type": 1 00:23:12.741 }, 00:23:12.741 { 00:23:12.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:12.741 "dma_device_type": 2 00:23:12.741 }, 00:23:12.741 { 
00:23:12.741 "dma_device_id": "system", 00:23:12.741 "dma_device_type": 1 00:23:12.741 }, 00:23:12.741 { 00:23:12.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:12.741 "dma_device_type": 2 00:23:12.741 } 00:23:12.741 ], 00:23:12.741 "driver_specific": { 00:23:12.741 "raid": { 00:23:12.741 "uuid": "35731873-6830-4db6-a1b9-71861092fa94", 00:23:12.741 "strip_size_kb": 0, 00:23:12.741 "state": "online", 00:23:12.741 "raid_level": "raid1", 00:23:12.742 "superblock": true, 00:23:12.742 "num_base_bdevs": 2, 00:23:12.742 "num_base_bdevs_discovered": 2, 00:23:12.742 "num_base_bdevs_operational": 2, 00:23:12.742 "base_bdevs_list": [ 00:23:12.742 { 00:23:12.742 "name": "BaseBdev1", 00:23:12.742 "uuid": "2320db10-b801-4ff6-a263-27d6540b2e62", 00:23:12.742 "is_configured": true, 00:23:12.742 "data_offset": 256, 00:23:12.742 "data_size": 7936 00:23:12.742 }, 00:23:12.742 { 00:23:12.742 "name": "BaseBdev2", 00:23:12.742 "uuid": "a3ded354-d6d6-4aef-9783-8ee660419388", 00:23:12.742 "is_configured": true, 00:23:12.742 "data_offset": 256, 00:23:12.742 "data_size": 7936 00:23:12.742 } 00:23:12.742 ] 00:23:12.742 } 00:23:12.742 } 00:23:12.742 }' 00:23:12.742 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:12.742 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:23:12.742 BaseBdev2' 00:23:12.742 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:12.742 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:23:12.742 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:12.742 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:23:12.742 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:12.742 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.742 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:12.742 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.001 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:23:13.001 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:23:13.001 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:13.001 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:13.001 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:13.001 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.001 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:13.001 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.001 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:23:13.001 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:23:13.001 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:13.001 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.001 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:13.001 [2024-11-25 12:22:08.902163] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:13.001 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.001 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:23:13.001 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:23:13.001 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:13.001 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:23:13.001 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:23:13.001 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:23:13.001 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:13.001 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:13.001 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:13.001 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:13.001 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:23:13.001 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:13.001 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:13.001 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:13.001 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:13.001 12:22:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:13.001 12:22:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:13.001 12:22:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.001 12:22:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:13.001 12:22:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.001 12:22:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:13.001 "name": "Existed_Raid", 00:23:13.001 "uuid": "35731873-6830-4db6-a1b9-71861092fa94", 00:23:13.001 "strip_size_kb": 0, 00:23:13.001 "state": "online", 00:23:13.001 "raid_level": "raid1", 00:23:13.001 "superblock": true, 00:23:13.001 "num_base_bdevs": 2, 00:23:13.001 "num_base_bdevs_discovered": 1, 00:23:13.001 "num_base_bdevs_operational": 1, 00:23:13.001 "base_bdevs_list": [ 00:23:13.001 { 00:23:13.001 "name": null, 00:23:13.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:13.001 "is_configured": false, 00:23:13.001 "data_offset": 0, 00:23:13.001 "data_size": 7936 00:23:13.001 }, 00:23:13.001 { 00:23:13.001 "name": "BaseBdev2", 00:23:13.001 "uuid": 
"a3ded354-d6d6-4aef-9783-8ee660419388", 00:23:13.001 "is_configured": true, 00:23:13.001 "data_offset": 256, 00:23:13.001 "data_size": 7936 00:23:13.001 } 00:23:13.001 ] 00:23:13.001 }' 00:23:13.001 12:22:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:13.001 12:22:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:13.572 12:22:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:23:13.572 12:22:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:13.572 12:22:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:13.572 12:22:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:13.572 12:22:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.572 12:22:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:13.572 12:22:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.572 12:22:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:13.572 12:22:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:13.572 12:22:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:23:13.572 12:22:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.572 12:22:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:13.572 [2024-11-25 12:22:09.542580] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:13.572 [2024-11-25 12:22:09.542730] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:13.572 [2024-11-25 12:22:09.638254] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:13.572 [2024-11-25 12:22:09.638318] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:13.572 [2024-11-25 12:22:09.638355] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:23:13.572 12:22:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.572 12:22:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:13.572 12:22:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:13.572 12:22:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:13.572 12:22:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.572 12:22:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:13.572 12:22:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:23:13.572 12:22:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.862 12:22:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:23:13.862 12:22:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:23:13.862 12:22:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:23:13.862 12:22:09 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87625 00:23:13.862 12:22:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87625 ']' 00:23:13.862 12:22:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87625 00:23:13.862 12:22:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:23:13.862 12:22:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:13.862 12:22:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87625 00:23:13.862 killing process with pid 87625 00:23:13.862 12:22:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:13.862 12:22:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:13.862 12:22:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87625' 00:23:13.862 12:22:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87625 00:23:13.862 [2024-11-25 12:22:09.733702] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:13.862 12:22:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87625 00:23:13.862 [2024-11-25 12:22:09.748657] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:14.799 ************************************ 00:23:14.799 END TEST raid_state_function_test_sb_md_separate 00:23:14.799 ************************************ 00:23:14.799 12:22:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:23:14.799 00:23:14.799 real 0m5.529s 00:23:14.799 user 0m8.297s 
00:23:14.799 sys 0m0.864s 00:23:14.799 12:22:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:14.799 12:22:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:14.799 12:22:10 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:23:14.799 12:22:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:14.799 12:22:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:14.799 12:22:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:14.799 ************************************ 00:23:14.799 START TEST raid_superblock_test_md_separate 00:23:14.799 ************************************ 00:23:14.799 12:22:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:23:14.799 12:22:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:23:14.799 12:22:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:23:14.800 12:22:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:23:14.800 12:22:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:23:14.800 12:22:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:23:14.800 12:22:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:23:14.800 12:22:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:23:14.800 12:22:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:23:14.800 12:22:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local 
raid_bdev_name=raid_bdev1 00:23:14.800 12:22:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:23:14.800 12:22:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:23:14.800 12:22:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:23:14.800 12:22:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:23:14.800 12:22:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:23:14.800 12:22:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:23:14.800 12:22:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87878 00:23:14.800 12:22:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:23:14.800 12:22:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87878 00:23:14.800 12:22:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87878 ']' 00:23:14.800 12:22:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.800 12:22:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:14.800 12:22:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:14.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:14.800 12:22:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:14.800 12:22:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:15.060 [2024-11-25 12:22:10.957068] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:23:15.060 [2024-11-25 12:22:10.957442] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87878 ] 00:23:15.060 [2024-11-25 12:22:11.131832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.319 [2024-11-25 12:22:11.264396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.578 [2024-11-25 12:22:11.470364] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:15.578 [2024-11-25 12:22:11.470671] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:16.147 12:22:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:16.147 12:22:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:23:16.147 12:22:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:23:16.147 12:22:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:16.147 12:22:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:23:16.147 12:22:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:23:16.147 12:22:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:16.147 12:22:11 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:16.147 12:22:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:16.147 12:22:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:16.147 12:22:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:23:16.147 12:22:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.147 12:22:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:16.147 malloc1 00:23:16.147 12:22:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.147 12:22:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:16.147 12:22:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.147 12:22:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:16.147 [2024-11-25 12:22:12.007172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:16.147 [2024-11-25 12:22:12.007253] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:16.147 [2024-11-25 12:22:12.007287] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:16.147 [2024-11-25 12:22:12.007305] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:16.147 [2024-11-25 12:22:12.009860] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:16.147 [2024-11-25 12:22:12.009905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:23:16.147 pt1 00:23:16.147 12:22:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.147 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:16.147 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:16.147 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:23:16.147 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:23:16.147 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:16.147 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:16.147 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:16.147 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:16.147 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:23:16.147 12:22:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.147 12:22:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:16.147 malloc2 00:23:16.147 12:22:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.147 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:16.147 12:22:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.147 12:22:12 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:16.147 [2024-11-25 12:22:12.064636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:16.147 [2024-11-25 12:22:12.064845] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:16.147 [2024-11-25 12:22:12.064893] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:16.147 [2024-11-25 12:22:12.064910] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:16.147 [2024-11-25 12:22:12.067480] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:16.147 [2024-11-25 12:22:12.067528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:16.147 pt2 00:23:16.147 12:22:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.147 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:16.147 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:16.148 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:23:16.148 12:22:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.148 12:22:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:16.148 [2024-11-25 12:22:12.076671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:16.148 [2024-11-25 12:22:12.079100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:16.148 [2024-11-25 12:22:12.079361] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:16.148 [2024-11-25 12:22:12.079385] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:16.148 [2024-11-25 12:22:12.079485] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:16.148 [2024-11-25 12:22:12.079651] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:16.148 [2024-11-25 12:22:12.079673] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:16.148 [2024-11-25 12:22:12.079805] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:16.148 12:22:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.148 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:16.148 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:16.148 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:16.148 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:16.148 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:16.148 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:16.148 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:16.148 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:16.148 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:16.148 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:16.148 12:22:12 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:16.148 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:16.148 12:22:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.148 12:22:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:16.148 12:22:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.148 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:16.148 "name": "raid_bdev1", 00:23:16.148 "uuid": "b87f1431-8900-4711-9d05-629c213e0d93", 00:23:16.148 "strip_size_kb": 0, 00:23:16.148 "state": "online", 00:23:16.148 "raid_level": "raid1", 00:23:16.148 "superblock": true, 00:23:16.148 "num_base_bdevs": 2, 00:23:16.148 "num_base_bdevs_discovered": 2, 00:23:16.148 "num_base_bdevs_operational": 2, 00:23:16.148 "base_bdevs_list": [ 00:23:16.148 { 00:23:16.148 "name": "pt1", 00:23:16.148 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:16.148 "is_configured": true, 00:23:16.148 "data_offset": 256, 00:23:16.148 "data_size": 7936 00:23:16.148 }, 00:23:16.148 { 00:23:16.148 "name": "pt2", 00:23:16.148 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:16.148 "is_configured": true, 00:23:16.148 "data_offset": 256, 00:23:16.148 "data_size": 7936 00:23:16.148 } 00:23:16.148 ] 00:23:16.148 }' 00:23:16.148 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:16.148 12:22:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:16.716 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:23:16.716 12:22:12 bdev_raid.raid_superblock_test_md_separate -- 
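Side note on the geometry in the raid_bdev_info JSON above: each malloc base bdev is created as 32 MiB with 4096-byte blocks (8192 blocks), and the md-separate superblock reserves the first 256 blocks, which is exactly the "data_offset": 256 / "data_size": 7936 the RPC reports (and the "blockcnt 7936" in the earlier DEBUG line). A minimal arithmetic sketch; the constant and variable names are illustrative, not taken from the test scripts:

```python
# Check the geometry logged above: 32 MiB malloc base bdevs with
# 4096-byte blocks, minus 256 blocks reserved for the md-separate
# superblock, leave the 7936-block data_size the RPC reports.
MiB = 1024 * 1024
num_blocks = 32 * MiB // 4096   # 8192 blocks per base bdev
data_offset = 256               # blocks reserved for the superblock
data_size = num_blocks - data_offset
print(num_blocks, data_offset, data_size)
```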
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:23:16.716 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:16.716 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:16.716 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:23:16.716 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:16.716 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:16.716 12:22:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.716 12:22:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:16.716 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:16.716 [2024-11-25 12:22:12.593185] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:16.716 12:22:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.716 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:16.716 "name": "raid_bdev1", 00:23:16.716 "aliases": [ 00:23:16.716 "b87f1431-8900-4711-9d05-629c213e0d93" 00:23:16.716 ], 00:23:16.716 "product_name": "Raid Volume", 00:23:16.716 "block_size": 4096, 00:23:16.716 "num_blocks": 7936, 00:23:16.716 "uuid": "b87f1431-8900-4711-9d05-629c213e0d93", 00:23:16.716 "md_size": 32, 00:23:16.716 "md_interleave": false, 00:23:16.716 "dif_type": 0, 00:23:16.716 "assigned_rate_limits": { 00:23:16.716 "rw_ios_per_sec": 0, 00:23:16.716 "rw_mbytes_per_sec": 0, 00:23:16.716 "r_mbytes_per_sec": 0, 00:23:16.716 "w_mbytes_per_sec": 0 00:23:16.716 }, 00:23:16.716 "claimed": false, 00:23:16.716 "zoned": false, 
00:23:16.716 "supported_io_types": { 00:23:16.716 "read": true, 00:23:16.716 "write": true, 00:23:16.716 "unmap": false, 00:23:16.716 "flush": false, 00:23:16.716 "reset": true, 00:23:16.716 "nvme_admin": false, 00:23:16.716 "nvme_io": false, 00:23:16.716 "nvme_io_md": false, 00:23:16.716 "write_zeroes": true, 00:23:16.716 "zcopy": false, 00:23:16.716 "get_zone_info": false, 00:23:16.716 "zone_management": false, 00:23:16.716 "zone_append": false, 00:23:16.716 "compare": false, 00:23:16.716 "compare_and_write": false, 00:23:16.716 "abort": false, 00:23:16.716 "seek_hole": false, 00:23:16.716 "seek_data": false, 00:23:16.716 "copy": false, 00:23:16.716 "nvme_iov_md": false 00:23:16.716 }, 00:23:16.716 "memory_domains": [ 00:23:16.716 { 00:23:16.716 "dma_device_id": "system", 00:23:16.716 "dma_device_type": 1 00:23:16.716 }, 00:23:16.716 { 00:23:16.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:16.716 "dma_device_type": 2 00:23:16.716 }, 00:23:16.716 { 00:23:16.716 "dma_device_id": "system", 00:23:16.716 "dma_device_type": 1 00:23:16.716 }, 00:23:16.716 { 00:23:16.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:16.716 "dma_device_type": 2 00:23:16.716 } 00:23:16.716 ], 00:23:16.716 "driver_specific": { 00:23:16.716 "raid": { 00:23:16.716 "uuid": "b87f1431-8900-4711-9d05-629c213e0d93", 00:23:16.716 "strip_size_kb": 0, 00:23:16.716 "state": "online", 00:23:16.716 "raid_level": "raid1", 00:23:16.716 "superblock": true, 00:23:16.716 "num_base_bdevs": 2, 00:23:16.716 "num_base_bdevs_discovered": 2, 00:23:16.716 "num_base_bdevs_operational": 2, 00:23:16.716 "base_bdevs_list": [ 00:23:16.716 { 00:23:16.716 "name": "pt1", 00:23:16.716 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:16.716 "is_configured": true, 00:23:16.716 "data_offset": 256, 00:23:16.716 "data_size": 7936 00:23:16.716 }, 00:23:16.716 { 00:23:16.716 "name": "pt2", 00:23:16.716 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:16.716 "is_configured": true, 00:23:16.717 "data_offset": 256, 
00:23:16.717 "data_size": 7936 00:23:16.717 } 00:23:16.717 ] 00:23:16.717 } 00:23:16.717 } 00:23:16.717 }' 00:23:16.717 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:16.717 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:23:16.717 pt2' 00:23:16.717 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:16.717 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:23:16.717 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:16.717 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:23:16.717 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:16.717 12:22:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.717 12:22:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:16.717 12:22:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.976 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:23:16.976 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:23:16.976 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:16.976 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:23:16.976 12:22:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.976 12:22:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:16.976 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:16.976 12:22:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.976 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:23:16.976 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:23:16.976 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:16.976 12:22:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.976 12:22:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:16.976 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:23:16.976 [2024-11-25 12:22:12.877187] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:16.976 12:22:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.976 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b87f1431-8900-4711-9d05-629c213e0d93 00:23:16.976 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z b87f1431-8900-4711-9d05-629c213e0d93 ']' 00:23:16.976 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:16.976 12:22:12 bdev_raid.raid_superblock_test_md_separate -- 
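The property checks above build the string "4096 32 false 0" with the jq filter `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")` and compare the raid bdev against each passthru base. A small Python sketch of that fingerprint comparison, applied to a trimmed copy of the bdev info logged above (the helper name is made up; the real test does this in bash with jq):

```python
# Mimic the jq filter used by the test:
#   [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")
# jq's join(" ") renders booleans as "false"/"true", hence .lower().
raid_bdev_info = {
    "name": "raid_bdev1",
    "block_size": 4096,
    "md_size": 32,
    "md_interleave": False,
    "dif_type": 0,
}

def props_fingerprint(info):
    fields = [info["block_size"], info["md_size"],
              info["md_interleave"], info["dif_type"]]
    return " ".join(str(f).lower() for f in fields)

cmp_raid_bdev = props_fingerprint(raid_bdev_info)
print(cmp_raid_bdev)
```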
common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.976 12:22:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:16.976 [2024-11-25 12:22:12.924835] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:16.976 [2024-11-25 12:22:12.924978] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:16.976 [2024-11-25 12:22:12.925102] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:16.976 [2024-11-25 12:22:12.925179] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:16.976 [2024-11-25 12:22:12.925199] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:16.976 12:22:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.976 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:16.976 12:22:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.976 12:22:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:16.976 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:23:16.976 12:22:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.976 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:23:16.976 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:23:16.976 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:16.976 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:23:16.976 12:22:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.976 12:22:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:16.976 12:22:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.976 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:16.976 12:22:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:23:16.976 12:22:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.976 12:22:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:16.976 12:22:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.976 12:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:23:16.976 12:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:16.976 12:22:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.976 12:22:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:16.976 12:22:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.976 12:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:23:16.976 12:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:23:16.976 12:22:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:23:16.976 12:22:13 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:23:16.976 12:22:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:16.976 12:22:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:16.976 12:22:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:16.976 12:22:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:16.976 12:22:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:23:16.976 12:22:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.976 12:22:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:17.235 [2024-11-25 12:22:13.068907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:17.235 [2024-11-25 12:22:13.071421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:17.235 [2024-11-25 12:22:13.071528] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:23:17.235 [2024-11-25 12:22:13.071609] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:23:17.235 [2024-11-25 12:22:13.071636] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:17.235 [2024-11-25 12:22:13.071652] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:23:17.235 request: 00:23:17.235 { 00:23:17.235 "name": 
"raid_bdev1", 00:23:17.235 "raid_level": "raid1", 00:23:17.235 "base_bdevs": [ 00:23:17.235 "malloc1", 00:23:17.235 "malloc2" 00:23:17.235 ], 00:23:17.235 "superblock": false, 00:23:17.235 "method": "bdev_raid_create", 00:23:17.235 "req_id": 1 00:23:17.235 } 00:23:17.235 Got JSON-RPC error response 00:23:17.235 response: 00:23:17.235 { 00:23:17.235 "code": -17, 00:23:17.235 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:17.235 } 00:23:17.235 12:22:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:17.235 12:22:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:23:17.235 12:22:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:17.235 12:22:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:17.235 12:22:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:17.235 12:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:23:17.235 12:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:17.235 12:22:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.235 12:22:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:17.235 12:22:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.235 12:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:23:17.235 12:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:23:17.235 12:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:23:17.235 12:22:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.235 12:22:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:17.235 [2024-11-25 12:22:13.124889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:17.235 [2024-11-25 12:22:13.124968] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:17.235 [2024-11-25 12:22:13.124995] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:17.235 [2024-11-25 12:22:13.125016] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:17.235 [2024-11-25 12:22:13.127674] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:17.235 [2024-11-25 12:22:13.127747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:17.235 [2024-11-25 12:22:13.127817] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:17.235 [2024-11-25 12:22:13.127887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:17.235 pt1 00:23:17.235 12:22:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.235 12:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:23:17.235 12:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:17.235 12:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:17.235 12:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:17.235 12:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:23:17.235 12:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:17.235 12:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:17.235 12:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:17.235 12:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:17.235 12:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:17.235 12:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:17.235 12:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:17.235 12:22:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.235 12:22:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:17.235 12:22:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.235 12:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:17.235 "name": "raid_bdev1", 00:23:17.235 "uuid": "b87f1431-8900-4711-9d05-629c213e0d93", 00:23:17.235 "strip_size_kb": 0, 00:23:17.235 "state": "configuring", 00:23:17.235 "raid_level": "raid1", 00:23:17.235 "superblock": true, 00:23:17.235 "num_base_bdevs": 2, 00:23:17.235 "num_base_bdevs_discovered": 1, 00:23:17.235 "num_base_bdevs_operational": 2, 00:23:17.235 "base_bdevs_list": [ 00:23:17.235 { 00:23:17.235 "name": "pt1", 00:23:17.235 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:17.235 "is_configured": true, 00:23:17.235 "data_offset": 256, 00:23:17.235 "data_size": 7936 00:23:17.235 }, 00:23:17.235 { 00:23:17.235 "name": null, 00:23:17.235 
"uuid": "00000000-0000-0000-0000-000000000002", 00:23:17.235 "is_configured": false, 00:23:17.235 "data_offset": 256, 00:23:17.235 "data_size": 7936 00:23:17.235 } 00:23:17.235 ] 00:23:17.235 }' 00:23:17.235 12:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:17.235 12:22:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:17.801 12:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:23:17.801 12:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:23:17.801 12:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:17.801 12:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:17.801 12:22:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.801 12:22:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:17.801 [2024-11-25 12:22:13.645036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:17.801 [2024-11-25 12:22:13.645129] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:17.801 [2024-11-25 12:22:13.645166] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:17.801 [2024-11-25 12:22:13.645186] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:17.801 [2024-11-25 12:22:13.645489] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:17.801 [2024-11-25 12:22:13.645521] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:17.802 [2024-11-25 12:22:13.645591] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:23:17.802 [2024-11-25 12:22:13.645626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:17.802 [2024-11-25 12:22:13.645766] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:17.802 [2024-11-25 12:22:13.645788] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:17.802 [2024-11-25 12:22:13.645878] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:17.802 [2024-11-25 12:22:13.646025] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:17.802 [2024-11-25 12:22:13.646040] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:23:17.802 [2024-11-25 12:22:13.646162] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:17.802 pt2 00:23:17.802 12:22:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.802 12:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:23:17.802 12:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:17.802 12:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:17.802 12:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:17.802 12:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:17.802 12:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:17.802 12:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:17.802 12:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:23:17.802 12:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:17.802 12:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:17.802 12:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:17.802 12:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:17.802 12:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:17.802 12:22:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.802 12:22:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:17.802 12:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:17.802 12:22:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.802 12:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:17.802 "name": "raid_bdev1", 00:23:17.802 "uuid": "b87f1431-8900-4711-9d05-629c213e0d93", 00:23:17.802 "strip_size_kb": 0, 00:23:17.802 "state": "online", 00:23:17.802 "raid_level": "raid1", 00:23:17.802 "superblock": true, 00:23:17.802 "num_base_bdevs": 2, 00:23:17.802 "num_base_bdevs_discovered": 2, 00:23:17.802 "num_base_bdevs_operational": 2, 00:23:17.802 "base_bdevs_list": [ 00:23:17.802 { 00:23:17.802 "name": "pt1", 00:23:17.802 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:17.802 "is_configured": true, 00:23:17.802 "data_offset": 256, 00:23:17.802 "data_size": 7936 00:23:17.802 }, 00:23:17.802 { 00:23:17.802 "name": "pt2", 00:23:17.802 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:17.802 "is_configured": true, 00:23:17.802 "data_offset": 256, 
00:23:17.802 "data_size": 7936 00:23:17.802 } 00:23:17.802 ] 00:23:17.802 }' 00:23:17.802 12:22:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:17.802 12:22:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:18.369 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:23:18.369 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:23:18.369 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:18.369 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:18.369 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:23:18.369 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:18.369 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:18.369 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:18.369 12:22:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.369 12:22:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:18.369 [2024-11-25 12:22:14.181533] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:18.369 12:22:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.369 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:18.369 "name": "raid_bdev1", 00:23:18.369 "aliases": [ 00:23:18.369 "b87f1431-8900-4711-9d05-629c213e0d93" 00:23:18.369 ], 00:23:18.369 "product_name": 
"Raid Volume", 00:23:18.369 "block_size": 4096, 00:23:18.369 "num_blocks": 7936, 00:23:18.369 "uuid": "b87f1431-8900-4711-9d05-629c213e0d93", 00:23:18.369 "md_size": 32, 00:23:18.369 "md_interleave": false, 00:23:18.369 "dif_type": 0, 00:23:18.369 "assigned_rate_limits": { 00:23:18.369 "rw_ios_per_sec": 0, 00:23:18.369 "rw_mbytes_per_sec": 0, 00:23:18.369 "r_mbytes_per_sec": 0, 00:23:18.369 "w_mbytes_per_sec": 0 00:23:18.369 }, 00:23:18.369 "claimed": false, 00:23:18.369 "zoned": false, 00:23:18.369 "supported_io_types": { 00:23:18.369 "read": true, 00:23:18.369 "write": true, 00:23:18.369 "unmap": false, 00:23:18.369 "flush": false, 00:23:18.369 "reset": true, 00:23:18.369 "nvme_admin": false, 00:23:18.369 "nvme_io": false, 00:23:18.369 "nvme_io_md": false, 00:23:18.369 "write_zeroes": true, 00:23:18.369 "zcopy": false, 00:23:18.369 "get_zone_info": false, 00:23:18.369 "zone_management": false, 00:23:18.369 "zone_append": false, 00:23:18.369 "compare": false, 00:23:18.369 "compare_and_write": false, 00:23:18.369 "abort": false, 00:23:18.369 "seek_hole": false, 00:23:18.369 "seek_data": false, 00:23:18.369 "copy": false, 00:23:18.369 "nvme_iov_md": false 00:23:18.369 }, 00:23:18.369 "memory_domains": [ 00:23:18.369 { 00:23:18.369 "dma_device_id": "system", 00:23:18.369 "dma_device_type": 1 00:23:18.369 }, 00:23:18.369 { 00:23:18.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:18.369 "dma_device_type": 2 00:23:18.369 }, 00:23:18.369 { 00:23:18.369 "dma_device_id": "system", 00:23:18.369 "dma_device_type": 1 00:23:18.369 }, 00:23:18.369 { 00:23:18.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:18.369 "dma_device_type": 2 00:23:18.369 } 00:23:18.369 ], 00:23:18.369 "driver_specific": { 00:23:18.369 "raid": { 00:23:18.369 "uuid": "b87f1431-8900-4711-9d05-629c213e0d93", 00:23:18.369 "strip_size_kb": 0, 00:23:18.369 "state": "online", 00:23:18.369 "raid_level": "raid1", 00:23:18.369 "superblock": true, 00:23:18.369 "num_base_bdevs": 2, 00:23:18.369 
"num_base_bdevs_discovered": 2, 00:23:18.369 "num_base_bdevs_operational": 2, 00:23:18.369 "base_bdevs_list": [ 00:23:18.369 { 00:23:18.369 "name": "pt1", 00:23:18.369 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:18.369 "is_configured": true, 00:23:18.369 "data_offset": 256, 00:23:18.369 "data_size": 7936 00:23:18.369 }, 00:23:18.369 { 00:23:18.369 "name": "pt2", 00:23:18.369 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:18.369 "is_configured": true, 00:23:18.369 "data_offset": 256, 00:23:18.369 "data_size": 7936 00:23:18.369 } 00:23:18.369 ] 00:23:18.369 } 00:23:18.369 } 00:23:18.369 }' 00:23:18.369 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:18.370 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:23:18.370 pt2' 00:23:18.370 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:18.370 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:23:18.370 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:18.370 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:23:18.370 12:22:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.370 12:22:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:18.370 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:18.370 12:22:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.370 
12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:23:18.370 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:23:18.370 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:18.370 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:18.370 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:23:18.370 12:22:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.370 12:22:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:18.370 12:22:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.370 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:23:18.370 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:23:18.370 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:23:18.370 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:18.370 12:22:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.370 12:22:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:18.370 [2024-11-25 12:22:14.437601] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:18.629 12:22:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:23:18.629 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' b87f1431-8900-4711-9d05-629c213e0d93 '!=' b87f1431-8900-4711-9d05-629c213e0d93 ']' 00:23:18.629 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:23:18.629 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:18.629 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:23:18.629 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:23:18.629 12:22:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.629 12:22:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:18.629 [2024-11-25 12:22:14.489353] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:18.629 12:22:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.629 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:18.629 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:18.629 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:18.629 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:18.629 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:18.629 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:18.629 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:18.629 12:22:14 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:18.629 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:18.629 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:18.629 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:18.629 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.629 12:22:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.629 12:22:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:18.629 12:22:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.629 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:18.629 "name": "raid_bdev1", 00:23:18.629 "uuid": "b87f1431-8900-4711-9d05-629c213e0d93", 00:23:18.629 "strip_size_kb": 0, 00:23:18.629 "state": "online", 00:23:18.629 "raid_level": "raid1", 00:23:18.629 "superblock": true, 00:23:18.629 "num_base_bdevs": 2, 00:23:18.629 "num_base_bdevs_discovered": 1, 00:23:18.629 "num_base_bdevs_operational": 1, 00:23:18.629 "base_bdevs_list": [ 00:23:18.629 { 00:23:18.629 "name": null, 00:23:18.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.629 "is_configured": false, 00:23:18.629 "data_offset": 0, 00:23:18.629 "data_size": 7936 00:23:18.629 }, 00:23:18.629 { 00:23:18.629 "name": "pt2", 00:23:18.629 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:18.629 "is_configured": true, 00:23:18.629 "data_offset": 256, 00:23:18.629 "data_size": 7936 00:23:18.629 } 00:23:18.629 ] 00:23:18.629 }' 00:23:18.629 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:23:18.629 12:22:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:18.887 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:18.887 12:22:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.887 12:22:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:18.887 [2024-11-25 12:22:14.961424] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:18.887 [2024-11-25 12:22:14.961460] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:18.887 [2024-11-25 12:22:14.961560] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:18.887 [2024-11-25 12:22:14.961627] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:18.887 [2024-11-25 12:22:14.961646] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:23:18.887 12:22:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.887 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:23:18.887 12:22:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:18.887 12:22:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.887 12:22:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:19.146 12:22:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.146 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:23:19.146 12:22:15 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:23:19.146 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:23:19.146 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:19.146 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:23:19.146 12:22:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.146 12:22:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:19.146 12:22:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.146 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:23:19.146 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:19.146 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:23:19.146 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:23:19.146 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:23:19.146 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:19.146 12:22:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.146 12:22:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:19.146 [2024-11-25 12:22:15.033414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:19.146 [2024-11-25 12:22:15.033499] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:19.146 
[2024-11-25 12:22:15.033528] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:23:19.146 [2024-11-25 12:22:15.033548] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:19.146 [2024-11-25 12:22:15.036311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:19.146 [2024-11-25 12:22:15.036389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:19.146 [2024-11-25 12:22:15.036474] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:19.146 [2024-11-25 12:22:15.036540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:19.146 [2024-11-25 12:22:15.036661] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:23:19.146 [2024-11-25 12:22:15.036684] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:19.146 [2024-11-25 12:22:15.036794] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:19.146 [2024-11-25 12:22:15.036936] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:19.146 [2024-11-25 12:22:15.036951] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:23:19.146 [2024-11-25 12:22:15.037072] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:19.146 pt2 00:23:19.146 12:22:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.146 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:19.146 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:19.146 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:23:19.146 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:19.146 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:19.146 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:19.146 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:19.146 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:19.146 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:19.146 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:19.146 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:19.146 12:22:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.146 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:19.146 12:22:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:19.146 12:22:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.146 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:19.146 "name": "raid_bdev1", 00:23:19.146 "uuid": "b87f1431-8900-4711-9d05-629c213e0d93", 00:23:19.146 "strip_size_kb": 0, 00:23:19.146 "state": "online", 00:23:19.146 "raid_level": "raid1", 00:23:19.146 "superblock": true, 00:23:19.146 "num_base_bdevs": 2, 00:23:19.146 "num_base_bdevs_discovered": 1, 00:23:19.146 "num_base_bdevs_operational": 1, 00:23:19.146 "base_bdevs_list": [ 00:23:19.146 { 00:23:19.146 
"name": null, 00:23:19.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.146 "is_configured": false, 00:23:19.146 "data_offset": 256, 00:23:19.146 "data_size": 7936 00:23:19.146 }, 00:23:19.146 { 00:23:19.146 "name": "pt2", 00:23:19.146 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:19.146 "is_configured": true, 00:23:19.146 "data_offset": 256, 00:23:19.146 "data_size": 7936 00:23:19.146 } 00:23:19.146 ] 00:23:19.146 }' 00:23:19.146 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:19.147 12:22:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:19.713 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:19.713 12:22:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.714 12:22:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:19.714 [2024-11-25 12:22:15.549505] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:19.714 [2024-11-25 12:22:15.549549] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:19.714 [2024-11-25 12:22:15.549642] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:19.714 [2024-11-25 12:22:15.549726] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:19.714 [2024-11-25 12:22:15.549747] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:23:19.714 12:22:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.714 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:19.714 12:22:15 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.714 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:23:19.714 12:22:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:19.714 12:22:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.714 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:23:19.714 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:23:19.714 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:23:19.714 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:19.714 12:22:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.714 12:22:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:19.714 [2024-11-25 12:22:15.613573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:19.714 [2024-11-25 12:22:15.613659] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:19.714 [2024-11-25 12:22:15.613693] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:23:19.714 [2024-11-25 12:22:15.613709] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:19.714 [2024-11-25 12:22:15.616374] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:19.714 [2024-11-25 12:22:15.616416] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:19.714 [2024-11-25 12:22:15.616501] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt1 00:23:19.714 [2024-11-25 12:22:15.616560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:19.714 [2024-11-25 12:22:15.616726] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:19.714 [2024-11-25 12:22:15.616744] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:19.714 [2024-11-25 12:22:15.616769] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:23:19.714 [2024-11-25 12:22:15.616846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:19.714 [2024-11-25 12:22:15.616946] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:23:19.714 [2024-11-25 12:22:15.616961] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:19.714 [2024-11-25 12:22:15.617052] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:19.714 [2024-11-25 12:22:15.617187] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:23:19.714 [2024-11-25 12:22:15.617206] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:23:19.714 [2024-11-25 12:22:15.617355] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:19.714 pt1 00:23:19.714 12:22:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.714 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:23:19.714 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:19.714 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:23:19.714 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:19.714 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:19.714 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:19.714 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:19.714 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:19.714 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:19.714 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:19.714 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:19.714 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:19.714 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:19.714 12:22:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.714 12:22:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:19.714 12:22:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.714 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:19.714 "name": "raid_bdev1", 00:23:19.714 "uuid": "b87f1431-8900-4711-9d05-629c213e0d93", 00:23:19.714 "strip_size_kb": 0, 00:23:19.714 "state": "online", 00:23:19.714 "raid_level": "raid1", 00:23:19.714 "superblock": true, 00:23:19.714 "num_base_bdevs": 2, 00:23:19.714 "num_base_bdevs_discovered": 1, 00:23:19.714 
"num_base_bdevs_operational": 1, 00:23:19.714 "base_bdevs_list": [ 00:23:19.714 { 00:23:19.714 "name": null, 00:23:19.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.714 "is_configured": false, 00:23:19.714 "data_offset": 256, 00:23:19.714 "data_size": 7936 00:23:19.714 }, 00:23:19.714 { 00:23:19.714 "name": "pt2", 00:23:19.714 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:19.714 "is_configured": true, 00:23:19.714 "data_offset": 256, 00:23:19.714 "data_size": 7936 00:23:19.714 } 00:23:19.714 ] 00:23:19.714 }' 00:23:19.714 12:22:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:19.714 12:22:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:20.282 12:22:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:23:20.282 12:22:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.282 12:22:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:20.282 12:22:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:20.282 12:22:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.282 12:22:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:23:20.282 12:22:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:20.282 12:22:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.282 12:22:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:20.282 12:22:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:23:20.282 [2024-11-25 
12:22:16.209998] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:20.282 12:22:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.282 12:22:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' b87f1431-8900-4711-9d05-629c213e0d93 '!=' b87f1431-8900-4711-9d05-629c213e0d93 ']' 00:23:20.282 12:22:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87878 00:23:20.282 12:22:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87878 ']' 00:23:20.282 12:22:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87878 00:23:20.282 12:22:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:23:20.282 12:22:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:20.282 12:22:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87878 00:23:20.282 killing process with pid 87878 00:23:20.282 12:22:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:20.282 12:22:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:20.282 12:22:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87878' 00:23:20.282 12:22:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87878 00:23:20.282 [2024-11-25 12:22:16.285750] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:20.282 12:22:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87878 00:23:20.282 [2024-11-25 12:22:16.285891] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:23:20.282 [2024-11-25 12:22:16.285956] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:20.282 [2024-11-25 12:22:16.285984] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:23:20.541 [2024-11-25 12:22:16.483073] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:21.531 ************************************ 00:23:21.531 END TEST raid_superblock_test_md_separate 00:23:21.531 ************************************ 00:23:21.531 12:22:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:23:21.531 00:23:21.531 real 0m6.670s 00:23:21.531 user 0m10.538s 00:23:21.531 sys 0m1.001s 00:23:21.531 12:22:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:21.531 12:22:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:21.531 12:22:17 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:23:21.531 12:22:17 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:23:21.531 12:22:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:23:21.531 12:22:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:21.531 12:22:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:21.531 ************************************ 00:23:21.531 START TEST raid_rebuild_test_sb_md_separate 00:23:21.531 ************************************ 00:23:21.531 12:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:23:21.531 12:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:23:21.531 12:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:23:21.532 12:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:23:21.532 12:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:23:21.532 12:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:23:21.532 12:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:23:21.532 12:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:21.532 12:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:23:21.532 12:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:21.532 12:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:21.532 12:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:23:21.532 12:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:21.532 12:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:21.532 12:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:21.532 12:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:23:21.532 12:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:23:21.532 12:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:23:21.532 12:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:23:21.532 12:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:23:21.532 
12:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:23:21.532 12:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:23:21.532 12:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:23:21.532 12:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:23:21.532 12:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:23:21.532 12:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88206 00:23:21.532 12:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88206 00:23:21.532 12:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:21.532 12:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88206 ']' 00:23:21.532 12:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:21.532 12:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:21.532 12:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:21.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:21.532 12:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:21.532 12:22:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:21.791 [2024-11-25 12:22:17.715331] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:23:21.791 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:21.791 Zero copy mechanism will not be used. 00:23:21.791 [2024-11-25 12:22:17.715810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88206 ] 00:23:22.050 [2024-11-25 12:22:17.906584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.050 [2024-11-25 12:22:18.063287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:22.309 [2024-11-25 12:22:18.282182] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:22.309 [2024-11-25 12:22:18.282248] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:22.877 BaseBdev1_malloc 
00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:22.877 [2024-11-25 12:22:18.758883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:22.877 [2024-11-25 12:22:18.759286] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:22.877 [2024-11-25 12:22:18.759380] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:22.877 [2024-11-25 12:22:18.759417] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:22.877 [2024-11-25 12:22:18.762732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:22.877 [2024-11-25 12:22:18.762802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:22.877 BaseBdev1 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:22.877 BaseBdev2_malloc 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:22.877 [2024-11-25 12:22:18.825163] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:22.877 [2024-11-25 12:22:18.825264] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:22.877 [2024-11-25 12:22:18.825301] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:22.877 [2024-11-25 12:22:18.825320] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:22.877 [2024-11-25 12:22:18.827880] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:22.877 [2024-11-25 12:22:18.827932] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:22.877 BaseBdev2 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:22.877 spare_malloc 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:22.877 spare_delay 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:22.877 [2024-11-25 12:22:18.902755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:22.877 [2024-11-25 12:22:18.902856] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:22.877 [2024-11-25 12:22:18.902890] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:22.877 [2024-11-25 12:22:18.902909] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:22.877 [2024-11-25 12:22:18.905483] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:22.877 [2024-11-25 12:22:18.905534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:22.877 spare 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:23:22.877 [2024-11-25 12:22:18.910810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:22.877 [2024-11-25 12:22:18.913278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:22.877 [2024-11-25 12:22:18.913551] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:22.877 [2024-11-25 12:22:18.913577] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:22.877 [2024-11-25 12:22:18.913667] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:22.877 [2024-11-25 12:22:18.913838] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:22.877 [2024-11-25 12:22:18.913854] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:22.877 [2024-11-25 12:22:18.913989] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:22.877 12:22:18 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:22.877 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.136 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:23.136 "name": "raid_bdev1", 00:23:23.136 "uuid": "c2e1fd8c-d738-46d9-afee-8fafc081acb8", 00:23:23.136 "strip_size_kb": 0, 00:23:23.136 "state": "online", 00:23:23.136 "raid_level": "raid1", 00:23:23.136 "superblock": true, 00:23:23.136 "num_base_bdevs": 2, 00:23:23.136 "num_base_bdevs_discovered": 2, 00:23:23.136 "num_base_bdevs_operational": 2, 00:23:23.136 "base_bdevs_list": [ 00:23:23.136 { 00:23:23.136 "name": "BaseBdev1", 00:23:23.136 "uuid": "71520af2-db7f-507d-9e7f-c1a0173460d9", 00:23:23.136 "is_configured": true, 00:23:23.136 "data_offset": 256, 00:23:23.136 "data_size": 7936 00:23:23.136 }, 00:23:23.136 { 00:23:23.136 "name": "BaseBdev2", 00:23:23.136 "uuid": "0c0a0cba-5e53-5e32-b681-ad575aba048b", 00:23:23.136 "is_configured": true, 00:23:23.136 "data_offset": 256, 00:23:23.136 "data_size": 7936 
00:23:23.136 } 00:23:23.136 ] 00:23:23.136 }' 00:23:23.136 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:23.136 12:22:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:23.395 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:23.395 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.395 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:23:23.395 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:23.395 [2024-11-25 12:22:19.443328] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:23.395 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.654 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:23:23.654 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:23.654 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:23.654 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.654 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:23.654 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.654 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:23:23.654 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:23:23.654 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:23:23.654 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:23:23.654 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:23:23.654 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:23.654 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:23.654 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:23.654 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:23.654 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:23.654 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:23:23.654 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:23.654 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:23.654 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:23.913 [2024-11-25 12:22:19.859128] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:23.913 /dev/nbd0 00:23:23.913 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:23.913 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:23.913 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:23.913 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # local i 00:23:23.913 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:23.913 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:23.913 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:23.913 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:23:23.913 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:23.913 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:23.913 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:23.913 1+0 records in 00:23:23.913 1+0 records out 00:23:23.913 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280206 s, 14.6 MB/s 00:23:23.913 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:23.913 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:23:23.913 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:23.913 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:23.914 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:23:23.914 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:23.914 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:23.914 12:22:19 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:23:23.914 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:23:23.914 12:22:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:23:25.293 7936+0 records in 00:23:25.293 7936+0 records out 00:23:25.293 32505856 bytes (33 MB, 31 MiB) copied, 1.05016 s, 31.0 MB/s 00:23:25.293 12:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:23:25.293 12:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:25.293 12:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:25.293 12:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:25.293 12:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:23:25.293 12:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:25.293 12:22:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:25.293 [2024-11-25 12:22:21.282923] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:25.293 12:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:25.293 12:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:25.293 12:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:25.293 12:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:25.293 12:22:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:25.293 12:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:25.293 12:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:23:25.293 12:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:23:25.293 12:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:23:25.293 12:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.293 12:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:25.293 [2024-11-25 12:22:21.319029] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:25.293 12:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.293 12:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:25.293 12:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:25.293 12:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:25.293 12:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:25.293 12:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:25.293 12:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:25.293 12:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:25.293 12:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:23:25.293 12:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:25.293 12:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:25.293 12:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:25.293 12:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.293 12:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:25.293 12:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:25.293 12:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.293 12:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:25.293 "name": "raid_bdev1", 00:23:25.293 "uuid": "c2e1fd8c-d738-46d9-afee-8fafc081acb8", 00:23:25.293 "strip_size_kb": 0, 00:23:25.293 "state": "online", 00:23:25.293 "raid_level": "raid1", 00:23:25.293 "superblock": true, 00:23:25.293 "num_base_bdevs": 2, 00:23:25.293 "num_base_bdevs_discovered": 1, 00:23:25.293 "num_base_bdevs_operational": 1, 00:23:25.293 "base_bdevs_list": [ 00:23:25.293 { 00:23:25.293 "name": null, 00:23:25.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:25.293 "is_configured": false, 00:23:25.293 "data_offset": 0, 00:23:25.293 "data_size": 7936 00:23:25.293 }, 00:23:25.293 { 00:23:25.293 "name": "BaseBdev2", 00:23:25.293 "uuid": "0c0a0cba-5e53-5e32-b681-ad575aba048b", 00:23:25.293 "is_configured": true, 00:23:25.293 "data_offset": 256, 00:23:25.293 "data_size": 7936 00:23:25.293 } 00:23:25.293 ] 00:23:25.293 }' 00:23:25.293 12:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:25.293 12:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:23:25.861 12:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:25.861 12:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.861 12:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:25.861 [2024-11-25 12:22:21.843216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:25.861 [2024-11-25 12:22:21.857105] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:23:25.861 12:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.861 12:22:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:23:25.861 [2024-11-25 12:22:21.859650] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:26.798 12:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:26.798 12:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:26.798 12:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:26.798 12:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:26.798 12:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:26.798 12:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:26.798 12:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:26.798 12:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.798 12:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:26.798 12:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.058 12:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:27.058 "name": "raid_bdev1", 00:23:27.058 "uuid": "c2e1fd8c-d738-46d9-afee-8fafc081acb8", 00:23:27.058 "strip_size_kb": 0, 00:23:27.058 "state": "online", 00:23:27.058 "raid_level": "raid1", 00:23:27.058 "superblock": true, 00:23:27.058 "num_base_bdevs": 2, 00:23:27.058 "num_base_bdevs_discovered": 2, 00:23:27.058 "num_base_bdevs_operational": 2, 00:23:27.058 "process": { 00:23:27.058 "type": "rebuild", 00:23:27.058 "target": "spare", 00:23:27.058 "progress": { 00:23:27.058 "blocks": 2560, 00:23:27.058 "percent": 32 00:23:27.058 } 00:23:27.058 }, 00:23:27.058 "base_bdevs_list": [ 00:23:27.058 { 00:23:27.058 "name": "spare", 00:23:27.058 "uuid": "2a406299-f692-5c62-947a-e1650d06174c", 00:23:27.058 "is_configured": true, 00:23:27.058 "data_offset": 256, 00:23:27.058 "data_size": 7936 00:23:27.058 }, 00:23:27.058 { 00:23:27.058 "name": "BaseBdev2", 00:23:27.058 "uuid": "0c0a0cba-5e53-5e32-b681-ad575aba048b", 00:23:27.058 "is_configured": true, 00:23:27.058 "data_offset": 256, 00:23:27.058 "data_size": 7936 00:23:27.058 } 00:23:27.058 ] 00:23:27.058 }' 00:23:27.058 12:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:27.058 12:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:27.058 12:22:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:27.058 12:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:27.058 12:22:23 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:27.058 12:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.058 12:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:27.058 [2024-11-25 12:22:23.036912] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:27.058 [2024-11-25 12:22:23.068913] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:27.058 [2024-11-25 12:22:23.069218] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:27.058 [2024-11-25 12:22:23.069247] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:27.058 [2024-11-25 12:22:23.069264] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:27.058 12:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.058 12:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:27.058 12:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:27.058 12:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:27.058 12:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:27.058 12:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:27.058 12:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:27.058 12:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:27.058 12:22:23 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:27.058 12:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:27.058 12:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:27.058 12:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:27.058 12:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:27.058 12:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.058 12:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:27.058 12:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.058 12:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:27.058 "name": "raid_bdev1", 00:23:27.058 "uuid": "c2e1fd8c-d738-46d9-afee-8fafc081acb8", 00:23:27.058 "strip_size_kb": 0, 00:23:27.058 "state": "online", 00:23:27.058 "raid_level": "raid1", 00:23:27.058 "superblock": true, 00:23:27.058 "num_base_bdevs": 2, 00:23:27.058 "num_base_bdevs_discovered": 1, 00:23:27.058 "num_base_bdevs_operational": 1, 00:23:27.058 "base_bdevs_list": [ 00:23:27.058 { 00:23:27.058 "name": null, 00:23:27.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:27.058 "is_configured": false, 00:23:27.058 "data_offset": 0, 00:23:27.058 "data_size": 7936 00:23:27.058 }, 00:23:27.058 { 00:23:27.058 "name": "BaseBdev2", 00:23:27.058 "uuid": "0c0a0cba-5e53-5e32-b681-ad575aba048b", 00:23:27.058 "is_configured": true, 00:23:27.058 "data_offset": 256, 00:23:27.058 "data_size": 7936 00:23:27.058 } 00:23:27.058 ] 00:23:27.058 }' 00:23:27.316 12:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:27.316 12:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:27.574 12:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:27.574 12:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:27.574 12:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:27.574 12:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:27.574 12:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:27.574 12:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:27.574 12:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:27.574 12:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.574 12:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:27.574 12:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.574 12:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:27.574 "name": "raid_bdev1", 00:23:27.574 "uuid": "c2e1fd8c-d738-46d9-afee-8fafc081acb8", 00:23:27.574 "strip_size_kb": 0, 00:23:27.574 "state": "online", 00:23:27.574 "raid_level": "raid1", 00:23:27.574 "superblock": true, 00:23:27.574 "num_base_bdevs": 2, 00:23:27.574 "num_base_bdevs_discovered": 1, 00:23:27.574 "num_base_bdevs_operational": 1, 00:23:27.574 "base_bdevs_list": [ 00:23:27.574 { 00:23:27.574 "name": null, 00:23:27.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:27.574 
"is_configured": false, 00:23:27.574 "data_offset": 0, 00:23:27.574 "data_size": 7936 00:23:27.574 }, 00:23:27.574 { 00:23:27.574 "name": "BaseBdev2", 00:23:27.574 "uuid": "0c0a0cba-5e53-5e32-b681-ad575aba048b", 00:23:27.574 "is_configured": true, 00:23:27.574 "data_offset": 256, 00:23:27.574 "data_size": 7936 00:23:27.574 } 00:23:27.574 ] 00:23:27.574 }' 00:23:27.574 12:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:27.833 12:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:27.833 12:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:27.833 12:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:27.833 12:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:27.833 12:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.833 12:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:27.833 [2024-11-25 12:22:23.764224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:27.833 [2024-11-25 12:22:23.777359] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:23:27.833 12:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.833 12:22:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:23:27.833 [2024-11-25 12:22:23.780245] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:28.776 12:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:28.776 12:22:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:28.776 12:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:28.776 12:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:28.776 12:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:28.776 12:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:28.776 12:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.776 12:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:28.776 12:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:28.776 12:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.776 12:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:28.776 "name": "raid_bdev1", 00:23:28.776 "uuid": "c2e1fd8c-d738-46d9-afee-8fafc081acb8", 00:23:28.776 "strip_size_kb": 0, 00:23:28.776 "state": "online", 00:23:28.776 "raid_level": "raid1", 00:23:28.776 "superblock": true, 00:23:28.776 "num_base_bdevs": 2, 00:23:28.776 "num_base_bdevs_discovered": 2, 00:23:28.776 "num_base_bdevs_operational": 2, 00:23:28.776 "process": { 00:23:28.776 "type": "rebuild", 00:23:28.776 "target": "spare", 00:23:28.776 "progress": { 00:23:28.776 "blocks": 2560, 00:23:28.776 "percent": 32 00:23:28.776 } 00:23:28.776 }, 00:23:28.776 "base_bdevs_list": [ 00:23:28.776 { 00:23:28.776 "name": "spare", 00:23:28.776 "uuid": "2a406299-f692-5c62-947a-e1650d06174c", 00:23:28.776 "is_configured": true, 00:23:28.776 "data_offset": 256, 00:23:28.776 "data_size": 7936 00:23:28.776 }, 
00:23:28.776 { 00:23:28.776 "name": "BaseBdev2", 00:23:28.776 "uuid": "0c0a0cba-5e53-5e32-b681-ad575aba048b", 00:23:28.776 "is_configured": true, 00:23:28.776 "data_offset": 256, 00:23:28.776 "data_size": 7936 00:23:28.776 } 00:23:28.776 ] 00:23:28.776 }' 00:23:28.776 12:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:29.035 12:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:29.035 12:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:29.035 12:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:29.035 12:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:23:29.035 12:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:23:29.035 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:23:29.035 12:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:23:29.035 12:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:23:29.035 12:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:23:29.035 12:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=763 00:23:29.035 12:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:29.035 12:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:29.035 12:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:29.035 12:22:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:29.035 12:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:29.035 12:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:29.035 12:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:29.035 12:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.035 12:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:29.035 12:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:29.035 12:22:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.035 12:22:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:29.035 "name": "raid_bdev1", 00:23:29.035 "uuid": "c2e1fd8c-d738-46d9-afee-8fafc081acb8", 00:23:29.035 "strip_size_kb": 0, 00:23:29.035 "state": "online", 00:23:29.035 "raid_level": "raid1", 00:23:29.035 "superblock": true, 00:23:29.035 "num_base_bdevs": 2, 00:23:29.035 "num_base_bdevs_discovered": 2, 00:23:29.035 "num_base_bdevs_operational": 2, 00:23:29.035 "process": { 00:23:29.035 "type": "rebuild", 00:23:29.035 "target": "spare", 00:23:29.035 "progress": { 00:23:29.035 "blocks": 2816, 00:23:29.035 "percent": 35 00:23:29.035 } 00:23:29.035 }, 00:23:29.035 "base_bdevs_list": [ 00:23:29.035 { 00:23:29.035 "name": "spare", 00:23:29.035 "uuid": "2a406299-f692-5c62-947a-e1650d06174c", 00:23:29.035 "is_configured": true, 00:23:29.035 "data_offset": 256, 00:23:29.035 "data_size": 7936 00:23:29.035 }, 00:23:29.035 { 00:23:29.035 "name": "BaseBdev2", 00:23:29.035 "uuid": "0c0a0cba-5e53-5e32-b681-ad575aba048b", 00:23:29.035 
"is_configured": true, 00:23:29.035 "data_offset": 256, 00:23:29.035 "data_size": 7936 00:23:29.035 } 00:23:29.035 ] 00:23:29.035 }' 00:23:29.035 12:22:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:29.035 12:22:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:29.035 12:22:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:29.035 12:22:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:29.035 12:22:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:30.412 12:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:30.412 12:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:30.412 12:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:30.412 12:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:30.412 12:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:30.412 12:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:30.412 12:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:30.412 12:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.412 12:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:30.412 12:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:30.412 12:22:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.412 12:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:30.412 "name": "raid_bdev1", 00:23:30.412 "uuid": "c2e1fd8c-d738-46d9-afee-8fafc081acb8", 00:23:30.412 "strip_size_kb": 0, 00:23:30.412 "state": "online", 00:23:30.412 "raid_level": "raid1", 00:23:30.412 "superblock": true, 00:23:30.412 "num_base_bdevs": 2, 00:23:30.412 "num_base_bdevs_discovered": 2, 00:23:30.412 "num_base_bdevs_operational": 2, 00:23:30.412 "process": { 00:23:30.412 "type": "rebuild", 00:23:30.412 "target": "spare", 00:23:30.412 "progress": { 00:23:30.412 "blocks": 5888, 00:23:30.412 "percent": 74 00:23:30.412 } 00:23:30.412 }, 00:23:30.412 "base_bdevs_list": [ 00:23:30.412 { 00:23:30.412 "name": "spare", 00:23:30.412 "uuid": "2a406299-f692-5c62-947a-e1650d06174c", 00:23:30.412 "is_configured": true, 00:23:30.412 "data_offset": 256, 00:23:30.412 "data_size": 7936 00:23:30.412 }, 00:23:30.412 { 00:23:30.412 "name": "BaseBdev2", 00:23:30.412 "uuid": "0c0a0cba-5e53-5e32-b681-ad575aba048b", 00:23:30.412 "is_configured": true, 00:23:30.412 "data_offset": 256, 00:23:30.412 "data_size": 7936 00:23:30.412 } 00:23:30.412 ] 00:23:30.412 }' 00:23:30.412 12:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:30.412 12:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:30.412 12:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:30.412 12:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:30.412 12:22:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:30.980 [2024-11-25 12:22:26.903916] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:23:30.980 [2024-11-25 12:22:26.904037] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:30.980 [2024-11-25 12:22:26.904209] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:31.239 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:31.239 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:31.239 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:31.239 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:31.239 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:31.239 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:31.239 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.239 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:31.239 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.239 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:31.239 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.498 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:31.498 "name": "raid_bdev1", 00:23:31.498 "uuid": "c2e1fd8c-d738-46d9-afee-8fafc081acb8", 00:23:31.498 "strip_size_kb": 0, 00:23:31.498 "state": "online", 00:23:31.498 "raid_level": "raid1", 00:23:31.498 "superblock": true, 00:23:31.498 
"num_base_bdevs": 2, 00:23:31.498 "num_base_bdevs_discovered": 2, 00:23:31.498 "num_base_bdevs_operational": 2, 00:23:31.498 "base_bdevs_list": [ 00:23:31.498 { 00:23:31.498 "name": "spare", 00:23:31.498 "uuid": "2a406299-f692-5c62-947a-e1650d06174c", 00:23:31.498 "is_configured": true, 00:23:31.498 "data_offset": 256, 00:23:31.498 "data_size": 7936 00:23:31.498 }, 00:23:31.498 { 00:23:31.498 "name": "BaseBdev2", 00:23:31.498 "uuid": "0c0a0cba-5e53-5e32-b681-ad575aba048b", 00:23:31.498 "is_configured": true, 00:23:31.498 "data_offset": 256, 00:23:31.498 "data_size": 7936 00:23:31.498 } 00:23:31.498 ] 00:23:31.498 }' 00:23:31.498 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:31.498 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:31.498 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:31.498 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:23:31.498 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:23:31.498 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:31.498 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:31.498 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:31.498 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:31.498 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:31.498 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:31.498 12:22:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.498 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.498 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:31.498 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.498 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:31.498 "name": "raid_bdev1", 00:23:31.498 "uuid": "c2e1fd8c-d738-46d9-afee-8fafc081acb8", 00:23:31.498 "strip_size_kb": 0, 00:23:31.498 "state": "online", 00:23:31.498 "raid_level": "raid1", 00:23:31.498 "superblock": true, 00:23:31.498 "num_base_bdevs": 2, 00:23:31.498 "num_base_bdevs_discovered": 2, 00:23:31.498 "num_base_bdevs_operational": 2, 00:23:31.498 "base_bdevs_list": [ 00:23:31.499 { 00:23:31.499 "name": "spare", 00:23:31.499 "uuid": "2a406299-f692-5c62-947a-e1650d06174c", 00:23:31.499 "is_configured": true, 00:23:31.499 "data_offset": 256, 00:23:31.499 "data_size": 7936 00:23:31.499 }, 00:23:31.499 { 00:23:31.499 "name": "BaseBdev2", 00:23:31.499 "uuid": "0c0a0cba-5e53-5e32-b681-ad575aba048b", 00:23:31.499 "is_configured": true, 00:23:31.499 "data_offset": 256, 00:23:31.499 "data_size": 7936 00:23:31.499 } 00:23:31.499 ] 00:23:31.499 }' 00:23:31.499 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:31.499 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:31.499 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:31.758 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:31.758 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:31.759 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:31.759 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:31.759 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:31.759 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:31.759 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:31.759 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:31.759 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:31.759 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:31.759 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:31.759 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:31.759 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.759 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:31.759 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.759 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.759 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:31.759 "name": "raid_bdev1", 00:23:31.759 "uuid": "c2e1fd8c-d738-46d9-afee-8fafc081acb8", 00:23:31.759 
"strip_size_kb": 0, 00:23:31.759 "state": "online", 00:23:31.759 "raid_level": "raid1", 00:23:31.759 "superblock": true, 00:23:31.759 "num_base_bdevs": 2, 00:23:31.759 "num_base_bdevs_discovered": 2, 00:23:31.759 "num_base_bdevs_operational": 2, 00:23:31.759 "base_bdevs_list": [ 00:23:31.759 { 00:23:31.759 "name": "spare", 00:23:31.759 "uuid": "2a406299-f692-5c62-947a-e1650d06174c", 00:23:31.759 "is_configured": true, 00:23:31.759 "data_offset": 256, 00:23:31.759 "data_size": 7936 00:23:31.759 }, 00:23:31.759 { 00:23:31.759 "name": "BaseBdev2", 00:23:31.759 "uuid": "0c0a0cba-5e53-5e32-b681-ad575aba048b", 00:23:31.759 "is_configured": true, 00:23:31.759 "data_offset": 256, 00:23:31.759 "data_size": 7936 00:23:31.759 } 00:23:31.759 ] 00:23:31.759 }' 00:23:31.759 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:31.759 12:22:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:32.018 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:32.018 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.018 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:32.018 [2024-11-25 12:22:28.098926] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:32.018 [2024-11-25 12:22:28.099207] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:32.018 [2024-11-25 12:22:28.099451] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:32.018 [2024-11-25 12:22:28.099788] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:32.018 [2024-11-25 12:22:28.099817] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:23:32.018 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.018 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:32.018 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.018 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:32.278 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:23:32.278 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.278 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:23:32.278 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:23:32.278 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:23:32.278 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:32.278 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:32.278 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:32.278 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:32.278 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:32.278 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:32.278 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:23:32.278 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:32.278 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:32.278 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:32.536 /dev/nbd0 00:23:32.536 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:32.536 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:32.536 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:32.536 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:23:32.536 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:32.536 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:32.537 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:32.537 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:23:32.537 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:32.537 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:32.537 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:32.537 1+0 records in 00:23:32.537 1+0 records out 00:23:32.537 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00063279 s, 6.5 MB/s 00:23:32.537 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:32.537 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:23:32.537 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:32.537 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:32.537 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:23:32.537 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:32.537 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:32.537 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:23:32.796 /dev/nbd1 00:23:32.796 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:32.796 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:32.796 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:23:32.796 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:23:32.796 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:32.796 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:32.796 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:23:32.796 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:23:32.796 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:32.796 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:32.796 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:32.796 1+0 records in 00:23:32.796 1+0 records out 00:23:32.796 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00043145 s, 9.5 MB/s 00:23:33.055 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:33.055 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:23:33.055 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:33.055 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:33.055 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:23:33.055 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:33.055 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:33.055 12:22:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:33.055 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:23:33.055 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:33.055 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:33.055 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:23:33.055 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:23:33.055 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:33.055 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:33.325 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:33.325 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:33.325 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:33.325 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:33.325 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:33.325 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:33.325 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:23:33.325 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:23:33.325 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:33.325 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:23:33.590 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:33.590 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:33.590 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:23:33.590 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:33.590 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:33.590 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:33.590 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:23:33.590 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:23:33.590 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:23:33.590 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:23:33.590 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.590 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:33.590 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.590 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:33.590 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.590 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:33.590 [2024-11-25 12:22:29.663531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:33.590 [2024-11-25 12:22:29.663612] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:33.590 [2024-11-25 12:22:29.663646] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:33.590 [2024-11-25 12:22:29.663662] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:23:33.590 [2024-11-25 12:22:29.666463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:33.590 [2024-11-25 12:22:29.666519] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:33.590 [2024-11-25 12:22:29.666610] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:33.590 [2024-11-25 12:22:29.666677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:33.590 [2024-11-25 12:22:29.666849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:33.590 spare 00:23:33.590 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.590 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:23:33.590 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.590 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:33.863 [2024-11-25 12:22:29.766976] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:23:33.863 [2024-11-25 12:22:29.767043] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:33.863 [2024-11-25 12:22:29.767219] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:23:33.863 [2024-11-25 12:22:29.767472] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:23:33.863 [2024-11-25 12:22:29.767491] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:23:33.863 [2024-11-25 12:22:29.767674] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:33.863 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:33.863 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:33.863 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:33.863 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:33.863 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:33.863 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:33.864 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:33.864 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:33.864 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:33.864 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:33.864 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:33.864 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:33.864 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.864 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.864 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:33.864 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.864 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:33.864 "name": "raid_bdev1", 00:23:33.864 "uuid": 
"c2e1fd8c-d738-46d9-afee-8fafc081acb8", 00:23:33.864 "strip_size_kb": 0, 00:23:33.864 "state": "online", 00:23:33.864 "raid_level": "raid1", 00:23:33.864 "superblock": true, 00:23:33.864 "num_base_bdevs": 2, 00:23:33.864 "num_base_bdevs_discovered": 2, 00:23:33.864 "num_base_bdevs_operational": 2, 00:23:33.864 "base_bdevs_list": [ 00:23:33.864 { 00:23:33.864 "name": "spare", 00:23:33.864 "uuid": "2a406299-f692-5c62-947a-e1650d06174c", 00:23:33.864 "is_configured": true, 00:23:33.864 "data_offset": 256, 00:23:33.864 "data_size": 7936 00:23:33.864 }, 00:23:33.864 { 00:23:33.864 "name": "BaseBdev2", 00:23:33.864 "uuid": "0c0a0cba-5e53-5e32-b681-ad575aba048b", 00:23:33.864 "is_configured": true, 00:23:33.864 "data_offset": 256, 00:23:33.864 "data_size": 7936 00:23:33.864 } 00:23:33.864 ] 00:23:33.864 }' 00:23:33.864 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:33.864 12:22:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:34.435 12:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:34.435 12:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:34.435 12:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:34.435 12:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:34.435 12:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:34.435 12:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:34.435 12:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:34.435 12:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.435 12:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:34.435 12:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.435 12:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:34.435 "name": "raid_bdev1", 00:23:34.435 "uuid": "c2e1fd8c-d738-46d9-afee-8fafc081acb8", 00:23:34.435 "strip_size_kb": 0, 00:23:34.435 "state": "online", 00:23:34.435 "raid_level": "raid1", 00:23:34.435 "superblock": true, 00:23:34.435 "num_base_bdevs": 2, 00:23:34.435 "num_base_bdevs_discovered": 2, 00:23:34.435 "num_base_bdevs_operational": 2, 00:23:34.435 "base_bdevs_list": [ 00:23:34.435 { 00:23:34.435 "name": "spare", 00:23:34.435 "uuid": "2a406299-f692-5c62-947a-e1650d06174c", 00:23:34.435 "is_configured": true, 00:23:34.435 "data_offset": 256, 00:23:34.435 "data_size": 7936 00:23:34.435 }, 00:23:34.435 { 00:23:34.435 "name": "BaseBdev2", 00:23:34.435 "uuid": "0c0a0cba-5e53-5e32-b681-ad575aba048b", 00:23:34.435 "is_configured": true, 00:23:34.435 "data_offset": 256, 00:23:34.435 "data_size": 7936 00:23:34.435 } 00:23:34.435 ] 00:23:34.435 }' 00:23:34.435 12:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:34.435 12:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:34.435 12:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:34.435 12:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:34.435 12:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:34.436 12:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:23:34.436 12:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:34.436 12:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:34.436 12:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.436 12:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:23:34.436 12:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:34.436 12:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.436 12:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:34.695 [2024-11-25 12:22:30.528533] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:34.695 12:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.695 12:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:34.695 12:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:34.695 12:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:34.695 12:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:34.695 12:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:34.695 12:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:34.695 12:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:34.695 12:22:30 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:34.695 12:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:34.695 12:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:34.695 12:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:34.695 12:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:34.695 12:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.695 12:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:34.695 12:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.695 12:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:34.695 "name": "raid_bdev1", 00:23:34.695 "uuid": "c2e1fd8c-d738-46d9-afee-8fafc081acb8", 00:23:34.695 "strip_size_kb": 0, 00:23:34.695 "state": "online", 00:23:34.695 "raid_level": "raid1", 00:23:34.695 "superblock": true, 00:23:34.695 "num_base_bdevs": 2, 00:23:34.695 "num_base_bdevs_discovered": 1, 00:23:34.695 "num_base_bdevs_operational": 1, 00:23:34.695 "base_bdevs_list": [ 00:23:34.695 { 00:23:34.695 "name": null, 00:23:34.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:34.695 "is_configured": false, 00:23:34.695 "data_offset": 0, 00:23:34.695 "data_size": 7936 00:23:34.695 }, 00:23:34.695 { 00:23:34.695 "name": "BaseBdev2", 00:23:34.695 "uuid": "0c0a0cba-5e53-5e32-b681-ad575aba048b", 00:23:34.695 "is_configured": true, 00:23:34.695 "data_offset": 256, 00:23:34.695 "data_size": 7936 00:23:34.695 } 00:23:34.695 ] 00:23:34.695 }' 00:23:34.695 12:22:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:34.695 12:22:30 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:35.264 12:22:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:35.264 12:22:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.264 12:22:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:35.264 [2024-11-25 12:22:31.064716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:35.264 [2024-11-25 12:22:31.065110] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:35.264 [2024-11-25 12:22:31.065295] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:23:35.264 [2024-11-25 12:22:31.065388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:35.264 [2024-11-25 12:22:31.078105] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:23:35.264 12:22:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.264 12:22:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:23:35.264 [2024-11-25 12:22:31.080677] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:36.204 12:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:36.204 12:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:36.204 12:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:36.204 12:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:23:36.204 12:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:36.204 12:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:36.204 12:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.204 12:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.204 12:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:36.204 12:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.204 12:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:36.204 "name": "raid_bdev1", 00:23:36.204 "uuid": "c2e1fd8c-d738-46d9-afee-8fafc081acb8", 00:23:36.204 "strip_size_kb": 0, 00:23:36.204 "state": "online", 00:23:36.204 "raid_level": "raid1", 00:23:36.204 "superblock": true, 00:23:36.204 "num_base_bdevs": 2, 00:23:36.204 "num_base_bdevs_discovered": 2, 00:23:36.204 "num_base_bdevs_operational": 2, 00:23:36.204 "process": { 00:23:36.204 "type": "rebuild", 00:23:36.204 "target": "spare", 00:23:36.204 "progress": { 00:23:36.204 "blocks": 2560, 00:23:36.204 "percent": 32 00:23:36.204 } 00:23:36.204 }, 00:23:36.204 "base_bdevs_list": [ 00:23:36.204 { 00:23:36.204 "name": "spare", 00:23:36.204 "uuid": "2a406299-f692-5c62-947a-e1650d06174c", 00:23:36.204 "is_configured": true, 00:23:36.204 "data_offset": 256, 00:23:36.204 "data_size": 7936 00:23:36.204 }, 00:23:36.204 { 00:23:36.204 "name": "BaseBdev2", 00:23:36.204 "uuid": "0c0a0cba-5e53-5e32-b681-ad575aba048b", 00:23:36.204 "is_configured": true, 00:23:36.204 "data_offset": 256, 00:23:36.204 "data_size": 7936 00:23:36.204 } 00:23:36.204 ] 00:23:36.204 }' 00:23:36.204 12:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:36.204 12:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:36.204 12:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:36.204 12:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:36.204 12:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:23:36.204 12:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.204 12:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:36.204 [2024-11-25 12:22:32.250491] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:36.204 [2024-11-25 12:22:32.289757] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:36.204 [2024-11-25 12:22:32.289841] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:36.204 [2024-11-25 12:22:32.289864] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:36.204 [2024-11-25 12:22:32.289891] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:36.463 12:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.463 12:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:36.463 12:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:36.463 12:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:36.463 12:22:32 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:36.463 12:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:36.463 12:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:36.463 12:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:36.463 12:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:36.463 12:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:36.463 12:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:36.463 12:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:36.463 12:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.463 12:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.463 12:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:36.463 12:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.463 12:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:36.463 "name": "raid_bdev1", 00:23:36.463 "uuid": "c2e1fd8c-d738-46d9-afee-8fafc081acb8", 00:23:36.463 "strip_size_kb": 0, 00:23:36.463 "state": "online", 00:23:36.463 "raid_level": "raid1", 00:23:36.463 "superblock": true, 00:23:36.463 "num_base_bdevs": 2, 00:23:36.464 "num_base_bdevs_discovered": 1, 00:23:36.464 "num_base_bdevs_operational": 1, 00:23:36.464 "base_bdevs_list": [ 00:23:36.464 { 00:23:36.464 "name": null, 00:23:36.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:36.464 
"is_configured": false, 00:23:36.464 "data_offset": 0, 00:23:36.464 "data_size": 7936 00:23:36.464 }, 00:23:36.464 { 00:23:36.464 "name": "BaseBdev2", 00:23:36.464 "uuid": "0c0a0cba-5e53-5e32-b681-ad575aba048b", 00:23:36.464 "is_configured": true, 00:23:36.464 "data_offset": 256, 00:23:36.464 "data_size": 7936 00:23:36.464 } 00:23:36.464 ] 00:23:36.464 }' 00:23:36.464 12:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:36.464 12:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:36.723 12:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:36.723 12:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.723 12:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:36.723 [2024-11-25 12:22:32.812175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:36.723 [2024-11-25 12:22:32.812258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:36.723 [2024-11-25 12:22:32.812292] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:23:36.723 [2024-11-25 12:22:32.812311] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:36.982 [2024-11-25 12:22:32.812637] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:36.982 [2024-11-25 12:22:32.812670] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:36.982 [2024-11-25 12:22:32.812752] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:36.982 [2024-11-25 12:22:32.812786] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:23:36.982 [2024-11-25 12:22:32.812799] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:23:36.982 [2024-11-25 12:22:32.812830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:36.982 [2024-11-25 12:22:32.825409] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:23:36.982 spare 00:23:36.982 12:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.982 12:22:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:23:36.982 [2024-11-25 12:22:32.827940] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:37.918 12:22:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:37.918 12:22:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:37.918 12:22:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:37.918 12:22:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:37.918 12:22:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:37.918 12:22:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:37.918 12:22:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:37.918 12:22:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.918 12:22:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:37.918 12:22:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:37.918 12:22:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:37.918 "name": "raid_bdev1", 00:23:37.918 "uuid": "c2e1fd8c-d738-46d9-afee-8fafc081acb8", 00:23:37.918 "strip_size_kb": 0, 00:23:37.918 "state": "online", 00:23:37.918 "raid_level": "raid1", 00:23:37.918 "superblock": true, 00:23:37.918 "num_base_bdevs": 2, 00:23:37.918 "num_base_bdevs_discovered": 2, 00:23:37.918 "num_base_bdevs_operational": 2, 00:23:37.918 "process": { 00:23:37.918 "type": "rebuild", 00:23:37.918 "target": "spare", 00:23:37.918 "progress": { 00:23:37.918 "blocks": 2560, 00:23:37.918 "percent": 32 00:23:37.918 } 00:23:37.918 }, 00:23:37.918 "base_bdevs_list": [ 00:23:37.918 { 00:23:37.918 "name": "spare", 00:23:37.918 "uuid": "2a406299-f692-5c62-947a-e1650d06174c", 00:23:37.918 "is_configured": true, 00:23:37.918 "data_offset": 256, 00:23:37.918 "data_size": 7936 00:23:37.918 }, 00:23:37.918 { 00:23:37.918 "name": "BaseBdev2", 00:23:37.918 "uuid": "0c0a0cba-5e53-5e32-b681-ad575aba048b", 00:23:37.918 "is_configured": true, 00:23:37.919 "data_offset": 256, 00:23:37.919 "data_size": 7936 00:23:37.919 } 00:23:37.919 ] 00:23:37.919 }' 00:23:37.919 12:22:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:37.919 12:22:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:37.919 12:22:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:37.919 12:22:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:37.919 12:22:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:23:37.919 12:22:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.919 12:22:33 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:37.919 [2024-11-25 12:22:33.989740] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:38.177 [2024-11-25 12:22:34.037055] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:38.177 [2024-11-25 12:22:34.037135] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:38.177 [2024-11-25 12:22:34.037163] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:38.177 [2024-11-25 12:22:34.037175] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:38.177 12:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.177 12:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:38.177 12:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:38.177 12:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:38.177 12:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:38.177 12:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:38.177 12:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:38.177 12:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:38.177 12:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:38.177 12:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:38.177 12:22:34 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:38.177 12:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:38.177 12:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:38.177 12:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.177 12:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:38.177 12:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.177 12:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:38.178 "name": "raid_bdev1", 00:23:38.178 "uuid": "c2e1fd8c-d738-46d9-afee-8fafc081acb8", 00:23:38.178 "strip_size_kb": 0, 00:23:38.178 "state": "online", 00:23:38.178 "raid_level": "raid1", 00:23:38.178 "superblock": true, 00:23:38.178 "num_base_bdevs": 2, 00:23:38.178 "num_base_bdevs_discovered": 1, 00:23:38.178 "num_base_bdevs_operational": 1, 00:23:38.178 "base_bdevs_list": [ 00:23:38.178 { 00:23:38.178 "name": null, 00:23:38.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:38.178 "is_configured": false, 00:23:38.178 "data_offset": 0, 00:23:38.178 "data_size": 7936 00:23:38.178 }, 00:23:38.178 { 00:23:38.178 "name": "BaseBdev2", 00:23:38.178 "uuid": "0c0a0cba-5e53-5e32-b681-ad575aba048b", 00:23:38.178 "is_configured": true, 00:23:38.178 "data_offset": 256, 00:23:38.178 "data_size": 7936 00:23:38.178 } 00:23:38.178 ] 00:23:38.178 }' 00:23:38.178 12:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:38.178 12:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:38.746 12:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:23:38.746 12:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:38.746 12:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:38.746 12:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:38.746 12:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:38.746 12:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:38.746 12:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.746 12:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:38.746 12:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:38.746 12:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.746 12:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:38.746 "name": "raid_bdev1", 00:23:38.746 "uuid": "c2e1fd8c-d738-46d9-afee-8fafc081acb8", 00:23:38.746 "strip_size_kb": 0, 00:23:38.746 "state": "online", 00:23:38.746 "raid_level": "raid1", 00:23:38.746 "superblock": true, 00:23:38.746 "num_base_bdevs": 2, 00:23:38.746 "num_base_bdevs_discovered": 1, 00:23:38.746 "num_base_bdevs_operational": 1, 00:23:38.746 "base_bdevs_list": [ 00:23:38.746 { 00:23:38.746 "name": null, 00:23:38.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:38.746 "is_configured": false, 00:23:38.746 "data_offset": 0, 00:23:38.746 "data_size": 7936 00:23:38.746 }, 00:23:38.746 { 00:23:38.746 "name": "BaseBdev2", 00:23:38.746 "uuid": "0c0a0cba-5e53-5e32-b681-ad575aba048b", 00:23:38.746 "is_configured": true, 
00:23:38.746 "data_offset": 256, 00:23:38.746 "data_size": 7936 00:23:38.746 } 00:23:38.746 ] 00:23:38.746 }' 00:23:38.746 12:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:38.746 12:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:38.746 12:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:38.746 12:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:38.746 12:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:23:38.746 12:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.746 12:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:38.746 12:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.746 12:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:38.746 12:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.746 12:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:38.746 [2024-11-25 12:22:34.739728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:38.746 [2024-11-25 12:22:34.739837] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:38.746 [2024-11-25 12:22:34.739887] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:23:38.746 [2024-11-25 12:22:34.739904] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:38.746 [2024-11-25 12:22:34.740193] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:38.746 [2024-11-25 12:22:34.740216] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:38.746 [2024-11-25 12:22:34.740288] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:38.747 [2024-11-25 12:22:34.740308] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:38.747 [2024-11-25 12:22:34.740322] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:38.747 [2024-11-25 12:22:34.740335] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:23:38.747 BaseBdev1 00:23:38.747 12:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.747 12:22:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:23:39.683 12:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:39.683 12:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:39.683 12:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:39.683 12:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:39.683 12:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:39.683 12:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:39.683 12:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:39.683 12:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:39.683 12:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:39.683 12:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:39.683 12:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:39.683 12:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.683 12:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.683 12:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:39.683 12:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.942 12:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:39.942 "name": "raid_bdev1", 00:23:39.942 "uuid": "c2e1fd8c-d738-46d9-afee-8fafc081acb8", 00:23:39.942 "strip_size_kb": 0, 00:23:39.942 "state": "online", 00:23:39.942 "raid_level": "raid1", 00:23:39.942 "superblock": true, 00:23:39.942 "num_base_bdevs": 2, 00:23:39.942 "num_base_bdevs_discovered": 1, 00:23:39.942 "num_base_bdevs_operational": 1, 00:23:39.942 "base_bdevs_list": [ 00:23:39.942 { 00:23:39.942 "name": null, 00:23:39.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:39.942 "is_configured": false, 00:23:39.942 "data_offset": 0, 00:23:39.942 "data_size": 7936 00:23:39.942 }, 00:23:39.942 { 00:23:39.942 "name": "BaseBdev2", 00:23:39.942 "uuid": "0c0a0cba-5e53-5e32-b681-ad575aba048b", 00:23:39.942 "is_configured": true, 00:23:39.942 "data_offset": 256, 00:23:39.942 "data_size": 7936 00:23:39.942 } 00:23:39.942 ] 00:23:39.942 }' 00:23:39.942 12:22:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:39.942 12:22:35 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:40.219 12:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:40.219 12:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:40.219 12:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:40.219 12:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:40.219 12:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:40.219 12:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:40.219 12:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.219 12:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:40.219 12:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:40.219 12:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.219 12:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:40.219 "name": "raid_bdev1", 00:23:40.219 "uuid": "c2e1fd8c-d738-46d9-afee-8fafc081acb8", 00:23:40.219 "strip_size_kb": 0, 00:23:40.219 "state": "online", 00:23:40.219 "raid_level": "raid1", 00:23:40.219 "superblock": true, 00:23:40.219 "num_base_bdevs": 2, 00:23:40.219 "num_base_bdevs_discovered": 1, 00:23:40.219 "num_base_bdevs_operational": 1, 00:23:40.219 "base_bdevs_list": [ 00:23:40.219 { 00:23:40.219 "name": null, 00:23:40.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:40.219 "is_configured": false, 00:23:40.219 "data_offset": 0, 00:23:40.219 
"data_size": 7936 00:23:40.219 }, 00:23:40.219 { 00:23:40.219 "name": "BaseBdev2", 00:23:40.219 "uuid": "0c0a0cba-5e53-5e32-b681-ad575aba048b", 00:23:40.219 "is_configured": true, 00:23:40.219 "data_offset": 256, 00:23:40.219 "data_size": 7936 00:23:40.219 } 00:23:40.219 ] 00:23:40.219 }' 00:23:40.219 12:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:40.507 12:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:40.507 12:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:40.507 12:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:40.507 12:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:40.507 12:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:23:40.507 12:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:40.507 12:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:40.507 12:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:40.507 12:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:40.507 12:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:40.507 12:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:40.507 12:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:23:40.507 12:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:40.507 [2024-11-25 12:22:36.400397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:40.507 [2024-11-25 12:22:36.400619] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:40.507 [2024-11-25 12:22:36.400645] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:40.507 request: 00:23:40.507 { 00:23:40.507 "base_bdev": "BaseBdev1", 00:23:40.507 "raid_bdev": "raid_bdev1", 00:23:40.507 "method": "bdev_raid_add_base_bdev", 00:23:40.507 "req_id": 1 00:23:40.507 } 00:23:40.507 Got JSON-RPC error response 00:23:40.507 response: 00:23:40.507 { 00:23:40.507 "code": -22, 00:23:40.507 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:23:40.507 } 00:23:40.507 12:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:40.507 12:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:23:40.507 12:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:40.507 12:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:40.507 12:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:40.507 12:22:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:23:41.444 12:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:41.444 12:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:41.444 12:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:41.444 12:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:41.444 12:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:41.444 12:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:41.444 12:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:41.444 12:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:41.444 12:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:41.444 12:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:41.444 12:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:41.444 12:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.444 12:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:41.444 12:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:41.444 12:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.444 12:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:41.444 "name": "raid_bdev1", 00:23:41.444 "uuid": "c2e1fd8c-d738-46d9-afee-8fafc081acb8", 00:23:41.444 "strip_size_kb": 0, 00:23:41.444 "state": "online", 00:23:41.444 "raid_level": "raid1", 00:23:41.444 "superblock": true, 00:23:41.444 "num_base_bdevs": 2, 00:23:41.444 "num_base_bdevs_discovered": 1, 00:23:41.444 "num_base_bdevs_operational": 1, 00:23:41.444 "base_bdevs_list": [ 
00:23:41.444 { 00:23:41.444 "name": null, 00:23:41.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:41.444 "is_configured": false, 00:23:41.444 "data_offset": 0, 00:23:41.444 "data_size": 7936 00:23:41.444 }, 00:23:41.444 { 00:23:41.444 "name": "BaseBdev2", 00:23:41.444 "uuid": "0c0a0cba-5e53-5e32-b681-ad575aba048b", 00:23:41.444 "is_configured": true, 00:23:41.444 "data_offset": 256, 00:23:41.444 "data_size": 7936 00:23:41.444 } 00:23:41.444 ] 00:23:41.444 }' 00:23:41.444 12:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:41.444 12:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:42.012 12:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:42.012 12:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:42.012 12:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:42.012 12:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:42.012 12:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:42.012 12:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:42.012 12:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:42.012 12:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.012 12:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:42.012 12:22:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.012 12:22:38 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:42.012 "name": "raid_bdev1", 00:23:42.012 "uuid": "c2e1fd8c-d738-46d9-afee-8fafc081acb8", 00:23:42.012 "strip_size_kb": 0, 00:23:42.012 "state": "online", 00:23:42.012 "raid_level": "raid1", 00:23:42.012 "superblock": true, 00:23:42.012 "num_base_bdevs": 2, 00:23:42.012 "num_base_bdevs_discovered": 1, 00:23:42.012 "num_base_bdevs_operational": 1, 00:23:42.012 "base_bdevs_list": [ 00:23:42.012 { 00:23:42.012 "name": null, 00:23:42.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:42.012 "is_configured": false, 00:23:42.012 "data_offset": 0, 00:23:42.012 "data_size": 7936 00:23:42.012 }, 00:23:42.012 { 00:23:42.012 "name": "BaseBdev2", 00:23:42.012 "uuid": "0c0a0cba-5e53-5e32-b681-ad575aba048b", 00:23:42.012 "is_configured": true, 00:23:42.012 "data_offset": 256, 00:23:42.012 "data_size": 7936 00:23:42.012 } 00:23:42.012 ] 00:23:42.012 }' 00:23:42.012 12:22:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:42.012 12:22:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:42.012 12:22:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:42.272 12:22:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:42.272 12:22:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88206 00:23:42.272 12:22:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88206 ']' 00:23:42.272 12:22:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88206 00:23:42.272 12:22:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:23:42.272 12:22:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:42.272 
12:22:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88206 00:23:42.272 12:22:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:42.272 12:22:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:42.272 killing process with pid 88206 00:23:42.272 Received shutdown signal, test time was about 60.000000 seconds 00:23:42.272 00:23:42.272 Latency(us) 00:23:42.272 [2024-11-25T12:22:38.363Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:42.272 [2024-11-25T12:22:38.363Z] =================================================================================================================== 00:23:42.272 [2024-11-25T12:22:38.363Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:42.272 12:22:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88206' 00:23:42.272 12:22:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88206 00:23:42.272 [2024-11-25 12:22:38.169587] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:42.272 12:22:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88206 00:23:42.272 [2024-11-25 12:22:38.169750] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:42.272 [2024-11-25 12:22:38.169815] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:42.272 [2024-11-25 12:22:38.169845] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:23:42.531 [2024-11-25 12:22:38.458120] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:43.465 12:22:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # 
return 0 00:23:43.465 00:23:43.465 real 0m21.911s 00:23:43.465 user 0m29.560s 00:23:43.465 sys 0m2.617s 00:23:43.465 12:22:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:43.465 12:22:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:43.465 ************************************ 00:23:43.465 END TEST raid_rebuild_test_sb_md_separate 00:23:43.465 ************************************ 00:23:43.465 12:22:39 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:23:43.465 12:22:39 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:23:43.465 12:22:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:43.465 12:22:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:43.465 12:22:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:43.724 ************************************ 00:23:43.724 START TEST raid_state_function_test_sb_md_interleaved 00:23:43.724 ************************************ 00:23:43.724 12:22:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:23:43.724 12:22:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:23:43.724 12:22:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:23:43.724 12:22:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:23:43.724 12:22:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:23:43.724 12:22:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:23:43.724 12:22:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:43.724 12:22:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:23:43.724 12:22:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:43.724 12:22:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:43.724 12:22:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:23:43.724 12:22:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:43.724 12:22:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:43.724 12:22:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:43.724 12:22:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:23:43.724 12:22:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:23:43.724 12:22:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:23:43.724 12:22:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:23:43.724 12:22:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:23:43.724 12:22:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:23:43.724 12:22:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:23:43.724 12:22:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:23:43.724 12:22:39 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:23:43.724 12:22:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88914 00:23:43.724 12:22:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:43.724 Process raid pid: 88914 00:23:43.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:43.724 12:22:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88914' 00:23:43.724 12:22:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88914 00:23:43.724 12:22:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88914 ']' 00:23:43.724 12:22:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:43.724 12:22:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:43.724 12:22:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:43.724 12:22:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:43.724 12:22:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:43.724 [2024-11-25 12:22:39.656237] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:23:43.724 [2024-11-25 12:22:39.656412] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:43.983 [2024-11-25 12:22:39.831745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.983 [2024-11-25 12:22:39.963915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.241 [2024-11-25 12:22:40.176784] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:44.241 [2024-11-25 12:22:40.176860] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:44.809 12:22:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:44.809 12:22:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:23:44.809 12:22:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:44.809 12:22:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.809 12:22:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:44.809 [2024-11-25 12:22:40.621118] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:44.809 [2024-11-25 12:22:40.621189] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:44.809 [2024-11-25 12:22:40.621208] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:44.809 [2024-11-25 12:22:40.621225] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:44.809 12:22:40 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.809 12:22:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:44.809 12:22:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:44.809 12:22:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:44.809 12:22:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:44.809 12:22:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:44.809 12:22:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:44.809 12:22:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:44.809 12:22:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:44.809 12:22:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:44.809 12:22:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:44.809 12:22:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:44.809 12:22:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:44.809 12:22:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.809 12:22:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:44.809 12:22:40 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.809 12:22:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:44.809 "name": "Existed_Raid", 00:23:44.809 "uuid": "ade493e4-6a08-48d8-b083-7a1bc26ba48a", 00:23:44.809 "strip_size_kb": 0, 00:23:44.809 "state": "configuring", 00:23:44.809 "raid_level": "raid1", 00:23:44.809 "superblock": true, 00:23:44.809 "num_base_bdevs": 2, 00:23:44.809 "num_base_bdevs_discovered": 0, 00:23:44.809 "num_base_bdevs_operational": 2, 00:23:44.809 "base_bdevs_list": [ 00:23:44.809 { 00:23:44.809 "name": "BaseBdev1", 00:23:44.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:44.809 "is_configured": false, 00:23:44.809 "data_offset": 0, 00:23:44.809 "data_size": 0 00:23:44.809 }, 00:23:44.809 { 00:23:44.809 "name": "BaseBdev2", 00:23:44.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:44.809 "is_configured": false, 00:23:44.809 "data_offset": 0, 00:23:44.809 "data_size": 0 00:23:44.809 } 00:23:44.809 ] 00:23:44.809 }' 00:23:44.809 12:22:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:44.809 12:22:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:45.068 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:45.068 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.068 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:45.068 [2024-11-25 12:22:41.125150] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:45.068 [2024-11-25 12:22:41.125351] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:23:45.068 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.068 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:45.068 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.068 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:45.068 [2024-11-25 12:22:41.133136] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:45.068 [2024-11-25 12:22:41.133322] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:45.068 [2024-11-25 12:22:41.133362] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:45.068 [2024-11-25 12:22:41.133385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:45.068 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.068 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:23:45.068 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.068 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:45.329 [2024-11-25 12:22:41.178291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:45.329 BaseBdev1 00:23:45.329 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.329 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:23:45.329 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:23:45.329 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:45.329 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:23:45.329 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:45.329 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:45.329 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:45.329 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.329 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:45.329 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.329 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:45.329 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.329 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:45.329 [ 00:23:45.329 { 00:23:45.329 "name": "BaseBdev1", 00:23:45.329 "aliases": [ 00:23:45.329 "22a5a8a4-2b39-449c-87ad-5edadfe6ee53" 00:23:45.329 ], 00:23:45.329 "product_name": "Malloc disk", 00:23:45.329 "block_size": 4128, 00:23:45.329 "num_blocks": 8192, 00:23:45.329 "uuid": "22a5a8a4-2b39-449c-87ad-5edadfe6ee53", 00:23:45.329 "md_size": 32, 00:23:45.329 
"md_interleave": true, 00:23:45.329 "dif_type": 0, 00:23:45.329 "assigned_rate_limits": { 00:23:45.329 "rw_ios_per_sec": 0, 00:23:45.329 "rw_mbytes_per_sec": 0, 00:23:45.329 "r_mbytes_per_sec": 0, 00:23:45.329 "w_mbytes_per_sec": 0 00:23:45.329 }, 00:23:45.329 "claimed": true, 00:23:45.329 "claim_type": "exclusive_write", 00:23:45.329 "zoned": false, 00:23:45.329 "supported_io_types": { 00:23:45.329 "read": true, 00:23:45.329 "write": true, 00:23:45.329 "unmap": true, 00:23:45.329 "flush": true, 00:23:45.329 "reset": true, 00:23:45.329 "nvme_admin": false, 00:23:45.329 "nvme_io": false, 00:23:45.329 "nvme_io_md": false, 00:23:45.329 "write_zeroes": true, 00:23:45.329 "zcopy": true, 00:23:45.329 "get_zone_info": false, 00:23:45.329 "zone_management": false, 00:23:45.329 "zone_append": false, 00:23:45.329 "compare": false, 00:23:45.329 "compare_and_write": false, 00:23:45.329 "abort": true, 00:23:45.329 "seek_hole": false, 00:23:45.329 "seek_data": false, 00:23:45.329 "copy": true, 00:23:45.329 "nvme_iov_md": false 00:23:45.329 }, 00:23:45.329 "memory_domains": [ 00:23:45.329 { 00:23:45.329 "dma_device_id": "system", 00:23:45.329 "dma_device_type": 1 00:23:45.329 }, 00:23:45.329 { 00:23:45.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:45.329 "dma_device_type": 2 00:23:45.329 } 00:23:45.329 ], 00:23:45.329 "driver_specific": {} 00:23:45.329 } 00:23:45.329 ] 00:23:45.329 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.329 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:23:45.329 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:45.329 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:45.329 12:22:41 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:45.329 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:45.329 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:45.329 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:45.329 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:45.329 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:45.329 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:45.329 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:45.329 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:45.329 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:45.329 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.329 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:45.329 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.329 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:45.329 "name": "Existed_Raid", 00:23:45.329 "uuid": "a7b3f50f-3f40-4313-bf9f-90efbbf15c56", 00:23:45.329 "strip_size_kb": 0, 00:23:45.329 "state": "configuring", 00:23:45.329 "raid_level": "raid1", 
00:23:45.329 "superblock": true, 00:23:45.329 "num_base_bdevs": 2, 00:23:45.329 "num_base_bdevs_discovered": 1, 00:23:45.329 "num_base_bdevs_operational": 2, 00:23:45.329 "base_bdevs_list": [ 00:23:45.329 { 00:23:45.329 "name": "BaseBdev1", 00:23:45.329 "uuid": "22a5a8a4-2b39-449c-87ad-5edadfe6ee53", 00:23:45.329 "is_configured": true, 00:23:45.329 "data_offset": 256, 00:23:45.329 "data_size": 7936 00:23:45.329 }, 00:23:45.329 { 00:23:45.329 "name": "BaseBdev2", 00:23:45.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:45.329 "is_configured": false, 00:23:45.329 "data_offset": 0, 00:23:45.329 "data_size": 0 00:23:45.329 } 00:23:45.329 ] 00:23:45.329 }' 00:23:45.329 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:45.329 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:45.897 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:45.897 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.897 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:45.897 [2024-11-25 12:22:41.706572] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:45.897 [2024-11-25 12:22:41.706632] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:23:45.897 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.897 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:45.897 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:23:45.897 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:45.897 [2024-11-25 12:22:41.714615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:45.897 [2024-11-25 12:22:41.717169] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:45.897 [2024-11-25 12:22:41.717365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:45.897 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.897 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:23:45.897 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:45.897 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:45.897 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:45.897 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:45.897 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:45.897 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:45.897 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:45.897 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:45.897 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:45.897 
12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:45.897 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:45.897 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:45.897 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.897 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:45.897 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:45.897 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.898 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:45.898 "name": "Existed_Raid", 00:23:45.898 "uuid": "dcd2ee04-8e54-48ec-9b75-e875f4bbccc5", 00:23:45.898 "strip_size_kb": 0, 00:23:45.898 "state": "configuring", 00:23:45.898 "raid_level": "raid1", 00:23:45.898 "superblock": true, 00:23:45.898 "num_base_bdevs": 2, 00:23:45.898 "num_base_bdevs_discovered": 1, 00:23:45.898 "num_base_bdevs_operational": 2, 00:23:45.898 "base_bdevs_list": [ 00:23:45.898 { 00:23:45.898 "name": "BaseBdev1", 00:23:45.898 "uuid": "22a5a8a4-2b39-449c-87ad-5edadfe6ee53", 00:23:45.898 "is_configured": true, 00:23:45.898 "data_offset": 256, 00:23:45.898 "data_size": 7936 00:23:45.898 }, 00:23:45.898 { 00:23:45.898 "name": "BaseBdev2", 00:23:45.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:45.898 "is_configured": false, 00:23:45.898 "data_offset": 0, 00:23:45.898 "data_size": 0 00:23:45.898 } 00:23:45.898 ] 00:23:45.898 }' 00:23:45.898 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:23:45.898 12:22:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:46.156 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:23:46.156 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.156 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:46.414 [2024-11-25 12:22:42.249485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:46.414 [2024-11-25 12:22:42.249748] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:46.414 [2024-11-25 12:22:42.249770] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:46.414 [2024-11-25 12:22:42.249873] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:46.414 [2024-11-25 12:22:42.249977] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:46.414 [2024-11-25 12:22:42.249997] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:23:46.414 BaseBdev2 00:23:46.414 [2024-11-25 12:22:42.250089] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:46.414 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.414 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:23:46.414 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:23:46.414 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:23:46.414 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:23:46.414 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:46.414 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:46.414 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:46.414 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.414 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:46.414 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.414 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:46.414 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.414 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:46.414 [ 00:23:46.414 { 00:23:46.414 "name": "BaseBdev2", 00:23:46.414 "aliases": [ 00:23:46.414 "a0e1ec8b-ebcb-4075-a268-956599ad32d2" 00:23:46.414 ], 00:23:46.414 "product_name": "Malloc disk", 00:23:46.414 "block_size": 4128, 00:23:46.414 "num_blocks": 8192, 00:23:46.414 "uuid": "a0e1ec8b-ebcb-4075-a268-956599ad32d2", 00:23:46.414 "md_size": 32, 00:23:46.414 "md_interleave": true, 00:23:46.414 "dif_type": 0, 00:23:46.414 "assigned_rate_limits": { 00:23:46.414 "rw_ios_per_sec": 0, 00:23:46.414 "rw_mbytes_per_sec": 0, 00:23:46.414 "r_mbytes_per_sec": 0, 00:23:46.414 "w_mbytes_per_sec": 0 00:23:46.414 }, 00:23:46.414 "claimed": true, 00:23:46.414 "claim_type": "exclusive_write", 
00:23:46.414 "zoned": false, 00:23:46.414 "supported_io_types": { 00:23:46.414 "read": true, 00:23:46.414 "write": true, 00:23:46.414 "unmap": true, 00:23:46.414 "flush": true, 00:23:46.414 "reset": true, 00:23:46.414 "nvme_admin": false, 00:23:46.414 "nvme_io": false, 00:23:46.414 "nvme_io_md": false, 00:23:46.414 "write_zeroes": true, 00:23:46.414 "zcopy": true, 00:23:46.414 "get_zone_info": false, 00:23:46.414 "zone_management": false, 00:23:46.414 "zone_append": false, 00:23:46.414 "compare": false, 00:23:46.414 "compare_and_write": false, 00:23:46.414 "abort": true, 00:23:46.414 "seek_hole": false, 00:23:46.414 "seek_data": false, 00:23:46.414 "copy": true, 00:23:46.414 "nvme_iov_md": false 00:23:46.414 }, 00:23:46.414 "memory_domains": [ 00:23:46.415 { 00:23:46.415 "dma_device_id": "system", 00:23:46.415 "dma_device_type": 1 00:23:46.415 }, 00:23:46.415 { 00:23:46.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:46.415 "dma_device_type": 2 00:23:46.415 } 00:23:46.415 ], 00:23:46.415 "driver_specific": {} 00:23:46.415 } 00:23:46.415 ] 00:23:46.415 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.415 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:23:46.415 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:46.415 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:46.415 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:23:46.415 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:46.415 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:46.415 
12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:46.415 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:46.415 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:46.415 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:46.415 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:46.415 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:46.415 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:46.415 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:46.415 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:46.415 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.415 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:46.415 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.415 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:46.415 "name": "Existed_Raid", 00:23:46.415 "uuid": "dcd2ee04-8e54-48ec-9b75-e875f4bbccc5", 00:23:46.415 "strip_size_kb": 0, 00:23:46.415 "state": "online", 00:23:46.415 "raid_level": "raid1", 00:23:46.415 "superblock": true, 00:23:46.415 "num_base_bdevs": 2, 00:23:46.415 "num_base_bdevs_discovered": 2, 00:23:46.415 
"num_base_bdevs_operational": 2, 00:23:46.415 "base_bdevs_list": [ 00:23:46.415 { 00:23:46.415 "name": "BaseBdev1", 00:23:46.415 "uuid": "22a5a8a4-2b39-449c-87ad-5edadfe6ee53", 00:23:46.415 "is_configured": true, 00:23:46.415 "data_offset": 256, 00:23:46.415 "data_size": 7936 00:23:46.415 }, 00:23:46.415 { 00:23:46.415 "name": "BaseBdev2", 00:23:46.415 "uuid": "a0e1ec8b-ebcb-4075-a268-956599ad32d2", 00:23:46.415 "is_configured": true, 00:23:46.415 "data_offset": 256, 00:23:46.415 "data_size": 7936 00:23:46.415 } 00:23:46.415 ] 00:23:46.415 }' 00:23:46.415 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:46.415 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:46.981 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:23:46.981 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:46.981 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:46.981 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:46.981 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:23:46.981 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:46.981 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:46.981 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.981 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:46.981 12:22:42 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:46.981 [2024-11-25 12:22:42.822060] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:46.982 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.982 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:46.982 "name": "Existed_Raid", 00:23:46.982 "aliases": [ 00:23:46.982 "dcd2ee04-8e54-48ec-9b75-e875f4bbccc5" 00:23:46.982 ], 00:23:46.982 "product_name": "Raid Volume", 00:23:46.982 "block_size": 4128, 00:23:46.982 "num_blocks": 7936, 00:23:46.982 "uuid": "dcd2ee04-8e54-48ec-9b75-e875f4bbccc5", 00:23:46.982 "md_size": 32, 00:23:46.982 "md_interleave": true, 00:23:46.982 "dif_type": 0, 00:23:46.982 "assigned_rate_limits": { 00:23:46.982 "rw_ios_per_sec": 0, 00:23:46.982 "rw_mbytes_per_sec": 0, 00:23:46.982 "r_mbytes_per_sec": 0, 00:23:46.982 "w_mbytes_per_sec": 0 00:23:46.982 }, 00:23:46.982 "claimed": false, 00:23:46.982 "zoned": false, 00:23:46.982 "supported_io_types": { 00:23:46.982 "read": true, 00:23:46.982 "write": true, 00:23:46.982 "unmap": false, 00:23:46.982 "flush": false, 00:23:46.982 "reset": true, 00:23:46.982 "nvme_admin": false, 00:23:46.982 "nvme_io": false, 00:23:46.982 "nvme_io_md": false, 00:23:46.982 "write_zeroes": true, 00:23:46.982 "zcopy": false, 00:23:46.982 "get_zone_info": false, 00:23:46.982 "zone_management": false, 00:23:46.982 "zone_append": false, 00:23:46.982 "compare": false, 00:23:46.982 "compare_and_write": false, 00:23:46.982 "abort": false, 00:23:46.982 "seek_hole": false, 00:23:46.982 "seek_data": false, 00:23:46.982 "copy": false, 00:23:46.982 "nvme_iov_md": false 00:23:46.982 }, 00:23:46.982 "memory_domains": [ 00:23:46.982 { 00:23:46.982 "dma_device_id": "system", 00:23:46.982 "dma_device_type": 1 00:23:46.982 }, 00:23:46.982 { 00:23:46.982 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:23:46.982 "dma_device_type": 2 00:23:46.982 }, 00:23:46.982 { 00:23:46.982 "dma_device_id": "system", 00:23:46.982 "dma_device_type": 1 00:23:46.982 }, 00:23:46.982 { 00:23:46.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:46.982 "dma_device_type": 2 00:23:46.982 } 00:23:46.982 ], 00:23:46.982 "driver_specific": { 00:23:46.982 "raid": { 00:23:46.982 "uuid": "dcd2ee04-8e54-48ec-9b75-e875f4bbccc5", 00:23:46.982 "strip_size_kb": 0, 00:23:46.982 "state": "online", 00:23:46.982 "raid_level": "raid1", 00:23:46.982 "superblock": true, 00:23:46.982 "num_base_bdevs": 2, 00:23:46.982 "num_base_bdevs_discovered": 2, 00:23:46.982 "num_base_bdevs_operational": 2, 00:23:46.982 "base_bdevs_list": [ 00:23:46.982 { 00:23:46.982 "name": "BaseBdev1", 00:23:46.982 "uuid": "22a5a8a4-2b39-449c-87ad-5edadfe6ee53", 00:23:46.982 "is_configured": true, 00:23:46.982 "data_offset": 256, 00:23:46.982 "data_size": 7936 00:23:46.982 }, 00:23:46.982 { 00:23:46.982 "name": "BaseBdev2", 00:23:46.982 "uuid": "a0e1ec8b-ebcb-4075-a268-956599ad32d2", 00:23:46.982 "is_configured": true, 00:23:46.982 "data_offset": 256, 00:23:46.982 "data_size": 7936 00:23:46.982 } 00:23:46.982 ] 00:23:46.982 } 00:23:46.982 } 00:23:46.982 }' 00:23:46.982 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:46.982 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:23:46.982 BaseBdev2' 00:23:46.982 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:46.982 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:23:46.982 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:23:46.982 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:23:46.982 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:46.982 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.982 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:46.982 12:22:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.982 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:23:46.982 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:23:46.982 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:46.982 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:46.982 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:46.982 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.982 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:46.982 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.240 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:23:47.240 
12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:23:47.240 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:47.240 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.240 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:47.240 [2024-11-25 12:22:43.081819] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:47.240 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.240 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:23:47.240 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:23:47.240 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:47.240 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:23:47.240 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:23:47.240 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:23:47.240 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:47.240 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:47.240 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:47.240 12:22:43 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:47.240 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:47.240 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:47.240 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:47.240 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:47.240 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:47.240 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:47.240 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:47.240 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.240 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:47.240 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.240 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:47.240 "name": "Existed_Raid", 00:23:47.240 "uuid": "dcd2ee04-8e54-48ec-9b75-e875f4bbccc5", 00:23:47.240 "strip_size_kb": 0, 00:23:47.240 "state": "online", 00:23:47.240 "raid_level": "raid1", 00:23:47.240 "superblock": true, 00:23:47.240 "num_base_bdevs": 2, 00:23:47.240 "num_base_bdevs_discovered": 1, 00:23:47.240 "num_base_bdevs_operational": 1, 00:23:47.240 "base_bdevs_list": [ 00:23:47.240 { 00:23:47.240 "name": null, 00:23:47.240 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:23:47.240 "is_configured": false, 00:23:47.240 "data_offset": 0, 00:23:47.240 "data_size": 7936 00:23:47.240 }, 00:23:47.240 { 00:23:47.240 "name": "BaseBdev2", 00:23:47.240 "uuid": "a0e1ec8b-ebcb-4075-a268-956599ad32d2", 00:23:47.240 "is_configured": true, 00:23:47.240 "data_offset": 256, 00:23:47.240 "data_size": 7936 00:23:47.240 } 00:23:47.240 ] 00:23:47.240 }' 00:23:47.240 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:47.240 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:47.806 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:23:47.806 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:47.806 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:47.806 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:47.806 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.806 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:47.806 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.806 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:47.806 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:47.806 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:23:47.807 12:22:43 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.807 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:47.807 [2024-11-25 12:22:43.755065] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:47.807 [2024-11-25 12:22:43.755202] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:47.807 [2024-11-25 12:22:43.839691] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:47.807 [2024-11-25 12:22:43.839760] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:47.807 [2024-11-25 12:22:43.839782] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:23:47.807 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.807 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:47.807 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:47.807 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:47.807 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:23:47.807 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.807 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:47.807 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.065 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:23:48.065 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:23:48.065 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:23:48.065 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88914 00:23:48.065 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88914 ']' 00:23:48.065 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88914 00:23:48.065 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:23:48.065 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:48.065 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88914 00:23:48.065 killing process with pid 88914 00:23:48.065 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:48.065 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:48.065 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88914' 00:23:48.065 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88914 00:23:48.065 [2024-11-25 12:22:43.928655] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:48.065 12:22:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88914 00:23:48.065 [2024-11-25 12:22:43.943400] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:49.001 
12:22:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:23:49.001 00:23:49.001 real 0m5.411s 00:23:49.001 user 0m8.132s 00:23:49.001 sys 0m0.816s 00:23:49.001 12:22:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:49.001 12:22:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:49.001 ************************************ 00:23:49.001 END TEST raid_state_function_test_sb_md_interleaved 00:23:49.001 ************************************ 00:23:49.001 12:22:45 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:23:49.001 12:22:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:49.001 12:22:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:49.001 12:22:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:49.001 ************************************ 00:23:49.001 START TEST raid_superblock_test_md_interleaved 00:23:49.001 ************************************ 00:23:49.001 12:22:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:23:49.001 12:22:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:23:49.001 12:22:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:23:49.001 12:22:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:23:49.001 12:22:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:23:49.001 12:22:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:23:49.001 12:22:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:23:49.001 12:22:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:23:49.001 12:22:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:23:49.001 12:22:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:23:49.001 12:22:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:23:49.001 12:22:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:23:49.001 12:22:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:23:49.001 12:22:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:23:49.001 12:22:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:23:49.001 12:22:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:23:49.001 12:22:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89161 00:23:49.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:49.001 12:22:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89161 00:23:49.001 12:22:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89161 ']' 00:23:49.001 12:22:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.001 12:22:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:49.001 12:22:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:23:49.001 12:22:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:49.001 12:22:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:49.001 12:22:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:49.260 [2024-11-25 12:22:45.142967] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:23:49.260 [2024-11-25 12:22:45.143142] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89161 ] 00:23:49.260 [2024-11-25 12:22:45.330603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.520 [2024-11-25 12:22:45.465149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.779 [2024-11-25 12:22:45.673272] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:49.779 [2024-11-25 12:22:45.673540] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:50.036 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:50.036 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:23:50.036 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:23:50.036 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:50.036 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:23:50.036 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:23:50.036 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:50.036 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:50.036 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:50.036 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # 
base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:50.036 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:23:50.036 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.036 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:50.294 malloc1 00:23:50.294 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.294 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:50.294 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.294 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:50.294 [2024-11-25 12:22:46.141470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:50.294 [2024-11-25 12:22:46.141675] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:50.294 [2024-11-25 12:22:46.141721] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:50.294 [2024-11-25 12:22:46.141739] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:50.294 [2024-11-25 12:22:46.144227] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:50.294 [2024-11-25 12:22:46.144275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:50.294 pt1 00:23:50.294 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.294 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:50.294 12:22:46 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:50.294 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:23:50.294 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:23:50.294 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:50.294 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:50.294 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:50.294 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:50.294 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:23:50.294 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.294 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:50.294 malloc2 00:23:50.294 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.294 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:50.294 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.294 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:50.294 [2024-11-25 12:22:46.190060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:50.294 [2024-11-25 12:22:46.190130] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:50.294 [2024-11-25 12:22:46.190165] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:50.294 [2024-11-25 12:22:46.190181] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:50.294 [2024-11-25 12:22:46.192681] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:50.294 [2024-11-25 12:22:46.192863] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:50.294 pt2 00:23:50.294 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.294 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:50.294 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:50.294 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:23:50.294 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.294 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:50.294 [2024-11-25 12:22:46.198107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:50.294 [2024-11-25 12:22:46.200785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:50.294 [2024-11-25 12:22:46.201173] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:50.294 [2024-11-25 12:22:46.201316] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:50.294 [2024-11-25 12:22:46.201500] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:50.294 [2024-11-25 12:22:46.201734] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:50.294 [2024-11-25 12:22:46.201872] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:50.295 [2024-11-25 12:22:46.202149] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:50.295 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.295 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:50.295 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:50.295 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:50.295 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:50.295 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:50.295 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:50.295 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:50.295 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:50.295 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:50.295 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:50.295 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:50.295 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:23:50.295 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.295 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:50.295 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.295 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:50.295 "name": "raid_bdev1", 00:23:50.295 "uuid": "75517d3f-3214-4611-9602-999927e0af41", 00:23:50.295 "strip_size_kb": 0, 00:23:50.295 "state": "online", 00:23:50.295 "raid_level": "raid1", 00:23:50.295 "superblock": true, 00:23:50.295 "num_base_bdevs": 2, 00:23:50.295 "num_base_bdevs_discovered": 2, 00:23:50.295 "num_base_bdevs_operational": 2, 00:23:50.295 "base_bdevs_list": [ 00:23:50.295 { 00:23:50.295 "name": "pt1", 00:23:50.295 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:50.295 "is_configured": true, 00:23:50.295 "data_offset": 256, 00:23:50.295 "data_size": 7936 00:23:50.295 }, 00:23:50.295 { 00:23:50.295 "name": "pt2", 00:23:50.295 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:50.295 "is_configured": true, 00:23:50.295 "data_offset": 256, 00:23:50.295 "data_size": 7936 00:23:50.295 } 00:23:50.295 ] 00:23:50.295 }' 00:23:50.295 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:50.295 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:50.862 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:23:50.862 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:23:50.862 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:50.862 12:22:46 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:50.862 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:23:50.862 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:50.862 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:50.862 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.862 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:50.862 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:50.862 [2024-11-25 12:22:46.762738] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:50.862 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.862 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:50.862 "name": "raid_bdev1", 00:23:50.862 "aliases": [ 00:23:50.862 "75517d3f-3214-4611-9602-999927e0af41" 00:23:50.862 ], 00:23:50.862 "product_name": "Raid Volume", 00:23:50.862 "block_size": 4128, 00:23:50.862 "num_blocks": 7936, 00:23:50.862 "uuid": "75517d3f-3214-4611-9602-999927e0af41", 00:23:50.862 "md_size": 32, 00:23:50.862 "md_interleave": true, 00:23:50.862 "dif_type": 0, 00:23:50.862 "assigned_rate_limits": { 00:23:50.862 "rw_ios_per_sec": 0, 00:23:50.862 "rw_mbytes_per_sec": 0, 00:23:50.862 "r_mbytes_per_sec": 0, 00:23:50.862 "w_mbytes_per_sec": 0 00:23:50.862 }, 00:23:50.862 "claimed": false, 00:23:50.862 "zoned": false, 00:23:50.862 "supported_io_types": { 00:23:50.862 "read": true, 00:23:50.862 "write": true, 00:23:50.862 "unmap": false, 00:23:50.862 "flush": false, 00:23:50.862 "reset": true, 
00:23:50.862 "nvme_admin": false, 00:23:50.862 "nvme_io": false, 00:23:50.862 "nvme_io_md": false, 00:23:50.862 "write_zeroes": true, 00:23:50.862 "zcopy": false, 00:23:50.862 "get_zone_info": false, 00:23:50.862 "zone_management": false, 00:23:50.862 "zone_append": false, 00:23:50.862 "compare": false, 00:23:50.862 "compare_and_write": false, 00:23:50.862 "abort": false, 00:23:50.862 "seek_hole": false, 00:23:50.862 "seek_data": false, 00:23:50.862 "copy": false, 00:23:50.862 "nvme_iov_md": false 00:23:50.862 }, 00:23:50.862 "memory_domains": [ 00:23:50.862 { 00:23:50.862 "dma_device_id": "system", 00:23:50.862 "dma_device_type": 1 00:23:50.862 }, 00:23:50.862 { 00:23:50.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:50.862 "dma_device_type": 2 00:23:50.862 }, 00:23:50.862 { 00:23:50.862 "dma_device_id": "system", 00:23:50.862 "dma_device_type": 1 00:23:50.862 }, 00:23:50.862 { 00:23:50.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:50.862 "dma_device_type": 2 00:23:50.862 } 00:23:50.862 ], 00:23:50.862 "driver_specific": { 00:23:50.862 "raid": { 00:23:50.862 "uuid": "75517d3f-3214-4611-9602-999927e0af41", 00:23:50.862 "strip_size_kb": 0, 00:23:50.862 "state": "online", 00:23:50.862 "raid_level": "raid1", 00:23:50.862 "superblock": true, 00:23:50.862 "num_base_bdevs": 2, 00:23:50.862 "num_base_bdevs_discovered": 2, 00:23:50.862 "num_base_bdevs_operational": 2, 00:23:50.862 "base_bdevs_list": [ 00:23:50.862 { 00:23:50.862 "name": "pt1", 00:23:50.862 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:50.862 "is_configured": true, 00:23:50.862 "data_offset": 256, 00:23:50.862 "data_size": 7936 00:23:50.862 }, 00:23:50.862 { 00:23:50.862 "name": "pt2", 00:23:50.862 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:50.862 "is_configured": true, 00:23:50.862 "data_offset": 256, 00:23:50.862 "data_size": 7936 00:23:50.862 } 00:23:50.862 ] 00:23:50.862 } 00:23:50.862 } 00:23:50.862 }' 00:23:50.862 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:50.862 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:23:50.862 pt2' 00:23:50.862 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:50.862 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:23:50.862 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:50.862 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:23:50.862 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:50.862 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.862 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:50.862 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.122 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:23:51.122 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:23:51.122 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:51.122 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:23:51.122 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:23:51.122 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.122 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:51.122 12:22:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.122 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:23:51.122 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:23:51.122 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:51.122 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:23:51.122 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.122 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:51.122 [2024-11-25 12:22:47.018687] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:51.122 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.122 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=75517d3f-3214-4611-9602-999927e0af41 00:23:51.122 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 75517d3f-3214-4611-9602-999927e0af41 ']' 00:23:51.122 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:51.122 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.123 12:22:47 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:51.123 [2024-11-25 12:22:47.066341] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:51.123 [2024-11-25 12:22:47.066395] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:51.123 [2024-11-25 12:22:47.066499] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:51.123 [2024-11-25 12:22:47.066601] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:51.123 [2024-11-25 12:22:47.066622] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:51.123 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.123 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:51.123 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.123 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:51.123 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:23:51.123 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.123 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:23:51.123 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:23:51.123 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:51.123 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:23:51.123 12:22:47 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.123 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:51.123 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.123 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:51.123 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:23:51.123 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.123 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:51.123 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.123 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:23:51.123 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.123 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:51.123 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:51.123 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.123 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:23:51.123 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:23:51.123 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:23:51.123 
12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:23:51.123 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:51.123 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:51.123 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:51.123 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:51.123 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:23:51.123 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.123 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:51.123 [2024-11-25 12:22:47.206435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:51.123 [2024-11-25 12:22:47.208962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:51.123 [2024-11-25 12:22:47.209074] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:23:51.123 [2024-11-25 12:22:47.209158] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:23:51.123 [2024-11-25 12:22:47.209185] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:51.123 [2024-11-25 12:22:47.209201] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:23:51.398 request: 
00:23:51.398 { 00:23:51.398 "name": "raid_bdev1", 00:23:51.398 "raid_level": "raid1", 00:23:51.398 "base_bdevs": [ 00:23:51.398 "malloc1", 00:23:51.398 "malloc2" 00:23:51.398 ], 00:23:51.398 "superblock": false, 00:23:51.398 "method": "bdev_raid_create", 00:23:51.398 "req_id": 1 00:23:51.398 } 00:23:51.398 Got JSON-RPC error response 00:23:51.398 response: 00:23:51.398 { 00:23:51.398 "code": -17, 00:23:51.398 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:51.398 } 00:23:51.398 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:51.398 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:23:51.398 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:51.398 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:51.398 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:51.398 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:51.398 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:23:51.398 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.398 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:51.398 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.398 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:23:51.398 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:23:51.398 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # 
rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:51.398 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.398 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:51.398 [2024-11-25 12:22:47.274422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:51.398 [2024-11-25 12:22:47.274630] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:51.398 [2024-11-25 12:22:47.274700] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:51.398 [2024-11-25 12:22:47.274916] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:51.398 [2024-11-25 12:22:47.277493] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:51.398 [2024-11-25 12:22:47.277659] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:51.398 [2024-11-25 12:22:47.277847] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:51.398 [2024-11-25 12:22:47.278063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:51.398 pt1 00:23:51.398 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.398 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:23:51.398 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:51.398 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:51.398 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:51.398 12:22:47 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:51.398 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:51.398 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:51.398 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:51.398 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:51.398 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:51.398 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:51.398 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:51.398 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.398 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:51.398 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.398 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:51.398 "name": "raid_bdev1", 00:23:51.398 "uuid": "75517d3f-3214-4611-9602-999927e0af41", 00:23:51.398 "strip_size_kb": 0, 00:23:51.398 "state": "configuring", 00:23:51.398 "raid_level": "raid1", 00:23:51.398 "superblock": true, 00:23:51.398 "num_base_bdevs": 2, 00:23:51.398 "num_base_bdevs_discovered": 1, 00:23:51.398 "num_base_bdevs_operational": 2, 00:23:51.398 "base_bdevs_list": [ 00:23:51.398 { 00:23:51.398 "name": "pt1", 00:23:51.398 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:51.398 "is_configured": true, 00:23:51.398 
"data_offset": 256, 00:23:51.398 "data_size": 7936 00:23:51.398 }, 00:23:51.398 { 00:23:51.398 "name": null, 00:23:51.398 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:51.398 "is_configured": false, 00:23:51.398 "data_offset": 256, 00:23:51.398 "data_size": 7936 00:23:51.398 } 00:23:51.398 ] 00:23:51.398 }' 00:23:51.398 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:51.398 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:51.723 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:23:51.723 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:23:51.723 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:51.723 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:51.723 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.723 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:51.723 [2024-11-25 12:22:47.786599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:51.723 [2024-11-25 12:22:47.786696] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:51.723 [2024-11-25 12:22:47.786734] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:51.723 [2024-11-25 12:22:47.786753] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:51.723 [2024-11-25 12:22:47.786961] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:51.723 [2024-11-25 12:22:47.787003] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:23:51.723 [2024-11-25 12:22:47.787068] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:51.723 [2024-11-25 12:22:47.787106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:51.723 [2024-11-25 12:22:47.787218] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:51.723 [2024-11-25 12:22:47.787238] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:51.723 [2024-11-25 12:22:47.787337] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:51.723 [2024-11-25 12:22:47.787456] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:51.723 [2024-11-25 12:22:47.787525] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:23:51.723 [2024-11-25 12:22:47.787635] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:51.723 pt2 00:23:51.723 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.723 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:23:51.723 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:51.723 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:51.723 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:51.723 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:51.723 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:51.723 12:22:47 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:51.723 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:51.723 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:51.723 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:51.723 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:51.723 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:51.723 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:51.723 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:51.723 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.723 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:51.981 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.981 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:51.981 "name": "raid_bdev1", 00:23:51.981 "uuid": "75517d3f-3214-4611-9602-999927e0af41", 00:23:51.981 "strip_size_kb": 0, 00:23:51.981 "state": "online", 00:23:51.981 "raid_level": "raid1", 00:23:51.981 "superblock": true, 00:23:51.981 "num_base_bdevs": 2, 00:23:51.981 "num_base_bdevs_discovered": 2, 00:23:51.981 "num_base_bdevs_operational": 2, 00:23:51.981 "base_bdevs_list": [ 00:23:51.981 { 00:23:51.981 "name": "pt1", 00:23:51.981 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:51.981 "is_configured": true, 00:23:51.981 
"data_offset": 256, 00:23:51.981 "data_size": 7936 00:23:51.981 }, 00:23:51.981 { 00:23:51.981 "name": "pt2", 00:23:51.981 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:51.981 "is_configured": true, 00:23:51.981 "data_offset": 256, 00:23:51.981 "data_size": 7936 00:23:51.981 } 00:23:51.981 ] 00:23:51.981 }' 00:23:51.981 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:51.981 12:22:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:52.242 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:23:52.242 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:23:52.242 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:52.242 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:52.242 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:23:52.242 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:52.242 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:52.242 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:52.242 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.242 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:52.242 [2024-11-25 12:22:48.315091] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:52.502 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:23:52.502 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:52.502 "name": "raid_bdev1", 00:23:52.502 "aliases": [ 00:23:52.502 "75517d3f-3214-4611-9602-999927e0af41" 00:23:52.502 ], 00:23:52.502 "product_name": "Raid Volume", 00:23:52.502 "block_size": 4128, 00:23:52.502 "num_blocks": 7936, 00:23:52.502 "uuid": "75517d3f-3214-4611-9602-999927e0af41", 00:23:52.502 "md_size": 32, 00:23:52.502 "md_interleave": true, 00:23:52.502 "dif_type": 0, 00:23:52.502 "assigned_rate_limits": { 00:23:52.502 "rw_ios_per_sec": 0, 00:23:52.502 "rw_mbytes_per_sec": 0, 00:23:52.502 "r_mbytes_per_sec": 0, 00:23:52.502 "w_mbytes_per_sec": 0 00:23:52.502 }, 00:23:52.502 "claimed": false, 00:23:52.502 "zoned": false, 00:23:52.502 "supported_io_types": { 00:23:52.502 "read": true, 00:23:52.502 "write": true, 00:23:52.502 "unmap": false, 00:23:52.502 "flush": false, 00:23:52.502 "reset": true, 00:23:52.502 "nvme_admin": false, 00:23:52.502 "nvme_io": false, 00:23:52.502 "nvme_io_md": false, 00:23:52.502 "write_zeroes": true, 00:23:52.502 "zcopy": false, 00:23:52.502 "get_zone_info": false, 00:23:52.502 "zone_management": false, 00:23:52.502 "zone_append": false, 00:23:52.502 "compare": false, 00:23:52.502 "compare_and_write": false, 00:23:52.502 "abort": false, 00:23:52.502 "seek_hole": false, 00:23:52.502 "seek_data": false, 00:23:52.502 "copy": false, 00:23:52.502 "nvme_iov_md": false 00:23:52.502 }, 00:23:52.502 "memory_domains": [ 00:23:52.502 { 00:23:52.502 "dma_device_id": "system", 00:23:52.502 "dma_device_type": 1 00:23:52.502 }, 00:23:52.502 { 00:23:52.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:52.502 "dma_device_type": 2 00:23:52.502 }, 00:23:52.502 { 00:23:52.502 "dma_device_id": "system", 00:23:52.502 "dma_device_type": 1 00:23:52.502 }, 00:23:52.502 { 00:23:52.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:52.502 "dma_device_type": 2 00:23:52.502 } 00:23:52.502 ], 00:23:52.502 "driver_specific": { 
00:23:52.502 "raid": { 00:23:52.502 "uuid": "75517d3f-3214-4611-9602-999927e0af41", 00:23:52.502 "strip_size_kb": 0, 00:23:52.502 "state": "online", 00:23:52.502 "raid_level": "raid1", 00:23:52.502 "superblock": true, 00:23:52.502 "num_base_bdevs": 2, 00:23:52.502 "num_base_bdevs_discovered": 2, 00:23:52.502 "num_base_bdevs_operational": 2, 00:23:52.502 "base_bdevs_list": [ 00:23:52.502 { 00:23:52.502 "name": "pt1", 00:23:52.502 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:52.502 "is_configured": true, 00:23:52.502 "data_offset": 256, 00:23:52.502 "data_size": 7936 00:23:52.502 }, 00:23:52.502 { 00:23:52.502 "name": "pt2", 00:23:52.502 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:52.502 "is_configured": true, 00:23:52.502 "data_offset": 256, 00:23:52.502 "data_size": 7936 00:23:52.502 } 00:23:52.502 ] 00:23:52.502 } 00:23:52.502 } 00:23:52.502 }' 00:23:52.502 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:52.502 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:23:52.502 pt2' 00:23:52.502 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:52.502 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:23:52.502 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:52.502 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:52.502 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:23:52.502 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.502 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:52.502 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.502 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:23:52.502 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:23:52.502 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:52.502 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:23:52.502 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.502 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:52.502 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:52.502 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.502 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:23:52.502 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:23:52.502 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:52.502 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.502 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 
00:23:52.502 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:23:52.761 [2024-11-25 12:22:48.591126] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:52.761 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.761 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 75517d3f-3214-4611-9602-999927e0af41 '!=' 75517d3f-3214-4611-9602-999927e0af41 ']' 00:23:52.761 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:23:52.761 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:52.761 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:23:52.761 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:23:52.761 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.761 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:52.761 [2024-11-25 12:22:48.646826] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:52.761 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.761 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:52.761 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:52.761 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:52.761 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:23:52.761 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:52.761 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:52.761 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:52.761 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:52.761 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:52.761 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:52.761 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:52.761 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.761 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:52.761 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:52.761 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.761 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:52.761 "name": "raid_bdev1", 00:23:52.761 "uuid": "75517d3f-3214-4611-9602-999927e0af41", 00:23:52.761 "strip_size_kb": 0, 00:23:52.761 "state": "online", 00:23:52.761 "raid_level": "raid1", 00:23:52.761 "superblock": true, 00:23:52.761 "num_base_bdevs": 2, 00:23:52.761 "num_base_bdevs_discovered": 1, 00:23:52.761 "num_base_bdevs_operational": 1, 00:23:52.761 "base_bdevs_list": [ 00:23:52.761 { 00:23:52.761 "name": null, 00:23:52.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:52.761 "is_configured": false, 
00:23:52.761 "data_offset": 0, 00:23:52.761 "data_size": 7936 00:23:52.761 }, 00:23:52.761 { 00:23:52.761 "name": "pt2", 00:23:52.761 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:52.761 "is_configured": true, 00:23:52.762 "data_offset": 256, 00:23:52.762 "data_size": 7936 00:23:52.762 } 00:23:52.762 ] 00:23:52.762 }' 00:23:52.762 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:52.762 12:22:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:53.326 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:53.326 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.326 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:53.326 [2024-11-25 12:22:49.150945] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:53.326 [2024-11-25 12:22:49.151144] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:53.326 [2024-11-25 12:22:49.151267] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:53.326 [2024-11-25 12:22:49.151336] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:53.326 [2024-11-25 12:22:49.151375] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:23:53.326 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.326 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:53.326 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:23:53.326 12:22:49 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.326 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:53.326 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.326 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:23:53.327 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:23:53.327 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:23:53.327 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:53.327 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:23:53.327 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.327 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:53.327 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.327 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:23:53.327 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:53.327 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:23:53.327 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:23:53.327 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:23:53.327 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:23:53.327 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.327 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:53.327 [2024-11-25 12:22:49.226956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:53.327 [2024-11-25 12:22:49.227026] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:53.327 [2024-11-25 12:22:49.227052] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:23:53.327 [2024-11-25 12:22:49.227070] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:53.327 [2024-11-25 12:22:49.229618] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:53.327 [2024-11-25 12:22:49.229787] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:53.327 [2024-11-25 12:22:49.229872] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:53.327 [2024-11-25 12:22:49.229939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:53.327 [2024-11-25 12:22:49.230039] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:23:53.327 [2024-11-25 12:22:49.230061] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:53.327 [2024-11-25 12:22:49.230177] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:53.327 [2024-11-25 12:22:49.230270] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:53.327 [2024-11-25 12:22:49.230285] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:23:53.327 [2024-11-25 12:22:49.230405] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:23:53.327 pt2 00:23:53.327 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.327 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:53.327 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:53.327 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:53.327 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:53.327 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:53.327 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:53.327 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:53.327 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:53.327 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:53.327 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:53.327 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:53.327 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:53.327 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.327 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:53.327 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.327 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:53.327 "name": "raid_bdev1", 00:23:53.327 "uuid": "75517d3f-3214-4611-9602-999927e0af41", 00:23:53.327 "strip_size_kb": 0, 00:23:53.327 "state": "online", 00:23:53.327 "raid_level": "raid1", 00:23:53.327 "superblock": true, 00:23:53.327 "num_base_bdevs": 2, 00:23:53.327 "num_base_bdevs_discovered": 1, 00:23:53.327 "num_base_bdevs_operational": 1, 00:23:53.327 "base_bdevs_list": [ 00:23:53.327 { 00:23:53.327 "name": null, 00:23:53.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:53.327 "is_configured": false, 00:23:53.327 "data_offset": 256, 00:23:53.327 "data_size": 7936 00:23:53.327 }, 00:23:53.327 { 00:23:53.327 "name": "pt2", 00:23:53.327 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:53.327 "is_configured": true, 00:23:53.327 "data_offset": 256, 00:23:53.327 "data_size": 7936 00:23:53.327 } 00:23:53.327 ] 00:23:53.327 }' 00:23:53.327 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:53.327 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:53.894 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:53.894 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.894 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:53.894 [2024-11-25 12:22:49.731072] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:53.894 [2024-11-25 12:22:49.731235] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:53.894 [2024-11-25 12:22:49.731449] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:53.894 
[2024-11-25 12:22:49.731640] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:53.894 [2024-11-25 12:22:49.731801] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:23:53.894 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.894 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:53.894 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.894 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:23:53.894 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:53.894 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.894 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:23:53.894 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:23:53.894 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:23:53.894 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:53.894 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.894 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:53.894 [2024-11-25 12:22:49.791091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:53.894 [2024-11-25 12:22:49.791289] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:23:53.894 [2024-11-25 12:22:49.791331] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:23:53.894 [2024-11-25 12:22:49.791367] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:53.894 [2024-11-25 12:22:49.793917] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:53.894 [2024-11-25 12:22:49.793956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:53.894 [2024-11-25 12:22:49.794045] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:53.894 [2024-11-25 12:22:49.794103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:53.894 [2024-11-25 12:22:49.794236] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:53.894 [2024-11-25 12:22:49.794253] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:53.894 [2024-11-25 12:22:49.794275] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:23:53.894 [2024-11-25 12:22:49.794357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:53.894 [2024-11-25 12:22:49.794485] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:23:53.894 [2024-11-25 12:22:49.794501] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:53.894 [2024-11-25 12:22:49.794610] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:53.894 [2024-11-25 12:22:49.794699] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:23:53.894 [2024-11-25 12:22:49.794718] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:23:53.894 [2024-11-25 
12:22:49.794810] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:53.894 pt1 00:23:53.894 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.894 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:23:53.894 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:53.894 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:53.894 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:53.894 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:53.894 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:53.894 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:53.894 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:53.894 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:53.894 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:53.894 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:53.894 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:53.894 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:53.894 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.894 
12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:53.894 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.894 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:53.894 "name": "raid_bdev1", 00:23:53.894 "uuid": "75517d3f-3214-4611-9602-999927e0af41", 00:23:53.894 "strip_size_kb": 0, 00:23:53.894 "state": "online", 00:23:53.894 "raid_level": "raid1", 00:23:53.894 "superblock": true, 00:23:53.894 "num_base_bdevs": 2, 00:23:53.894 "num_base_bdevs_discovered": 1, 00:23:53.894 "num_base_bdevs_operational": 1, 00:23:53.894 "base_bdevs_list": [ 00:23:53.894 { 00:23:53.894 "name": null, 00:23:53.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:53.894 "is_configured": false, 00:23:53.894 "data_offset": 256, 00:23:53.894 "data_size": 7936 00:23:53.894 }, 00:23:53.894 { 00:23:53.894 "name": "pt2", 00:23:53.894 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:53.894 "is_configured": true, 00:23:53.894 "data_offset": 256, 00:23:53.894 "data_size": 7936 00:23:53.894 } 00:23:53.894 ] 00:23:53.894 }' 00:23:53.894 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:53.894 12:22:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:54.461 12:22:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:54.461 12:22:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:23:54.461 12:22:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.461 12:22:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:54.461 12:22:50 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.461 12:22:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:23:54.461 12:22:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:23:54.461 12:22:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:54.462 12:22:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.462 12:22:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:54.462 [2024-11-25 12:22:50.363533] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:54.462 12:22:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.462 12:22:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 75517d3f-3214-4611-9602-999927e0af41 '!=' 75517d3f-3214-4611-9602-999927e0af41 ']' 00:23:54.462 12:22:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89161 00:23:54.462 12:22:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89161 ']' 00:23:54.462 12:22:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89161 00:23:54.462 12:22:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:23:54.462 12:22:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:54.462 12:22:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89161 00:23:54.462 12:22:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:54.462 12:22:50 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:54.462 12:22:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89161' 00:23:54.462 killing process with pid 89161 00:23:54.462 12:22:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 89161 00:23:54.462 [2024-11-25 12:22:50.434966] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:54.462 12:22:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 89161 00:23:54.462 [2024-11-25 12:22:50.435268] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:54.462 [2024-11-25 12:22:50.435467] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:54.462 [2024-11-25 12:22:50.435620] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:23:54.719 [2024-11-25 12:22:50.619354] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:55.656 12:22:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:23:55.656 00:23:55.656 real 0m6.638s 00:23:55.656 user 0m10.489s 00:23:55.656 sys 0m0.974s 00:23:55.656 12:22:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:55.656 12:22:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:55.656 ************************************ 00:23:55.656 END TEST raid_superblock_test_md_interleaved 00:23:55.656 ************************************ 00:23:55.656 12:22:51 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:23:55.656 12:22:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 
00:23:55.656 12:22:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:55.656 12:22:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:55.656 ************************************ 00:23:55.656 START TEST raid_rebuild_test_sb_md_interleaved 00:23:55.656 ************************************ 00:23:55.656 12:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:23:55.656 12:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:23:55.656 12:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:23:55.656 12:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:23:55.656 12:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:23:55.656 12:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:23:55.656 12:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:23:55.656 12:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:55.656 12:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:23:55.656 12:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:55.656 12:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:55.656 12:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:23:55.656 12:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:55.656 12:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:23:55.656 12:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:55.656 12:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:23:55.656 12:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:23:55.656 12:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:23:55.656 12:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:23:55.656 12:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:23:55.656 12:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:23:55.656 12:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:23:55.656 12:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:23:55.656 12:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:23:55.656 12:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:23:55.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:55.656 12:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89495 00:23:55.656 12:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89495 00:23:55.656 12:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89495 ']' 00:23:55.656 12:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.656 12:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:55.656 12:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:55.656 12:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.656 12:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:55.656 12:22:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:55.915 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:55.915 Zero copy mechanism will not be used. 00:23:55.915 [2024-11-25 12:22:51.823047] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:23:55.915 [2024-11-25 12:22:51.823208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89495 ] 00:23:55.915 [2024-11-25 12:22:51.998190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.173 [2024-11-25 12:22:52.132797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.432 [2024-11-25 12:22:52.378655] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:56.432 [2024-11-25 12:22:52.378738] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:57.000 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:57.000 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:23:57.000 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:57.000 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:23:57.000 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.000 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:57.000 BaseBdev1_malloc 00:23:57.000 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.000 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:57.000 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.000 12:22:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:57.000 [2024-11-25 12:22:52.836552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:57.000 [2024-11-25 12:22:52.836626] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:57.000 [2024-11-25 12:22:52.836656] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:57.000 [2024-11-25 12:22:52.836674] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:57.000 [2024-11-25 12:22:52.839272] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:57.000 [2024-11-25 12:22:52.839506] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:57.000 BaseBdev1 00:23:57.000 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.000 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:57.000 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:23:57.000 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.000 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:57.000 BaseBdev2_malloc 00:23:57.000 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.000 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:57.000 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.000 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:23:57.000 [2024-11-25 12:22:52.885491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:57.000 [2024-11-25 12:22:52.885734] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:57.000 [2024-11-25 12:22:52.885783] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:57.000 [2024-11-25 12:22:52.885802] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:57.000 [2024-11-25 12:22:52.888277] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:57.000 [2024-11-25 12:22:52.888353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:57.000 BaseBdev2 00:23:57.000 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.000 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:23:57.000 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.000 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:57.000 spare_malloc 00:23:57.001 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.001 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:57.001 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.001 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:57.001 spare_delay 00:23:57.001 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.001 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:57.001 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.001 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:57.001 [2024-11-25 12:22:52.954619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:57.001 [2024-11-25 12:22:52.954705] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:57.001 [2024-11-25 12:22:52.954741] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:57.001 [2024-11-25 12:22:52.954759] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:57.001 [2024-11-25 12:22:52.957339] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:57.001 [2024-11-25 12:22:52.957405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:57.001 spare 00:23:57.001 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.001 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:23:57.001 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.001 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:57.001 [2024-11-25 12:22:52.962678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:57.001 [2024-11-25 12:22:52.965194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:57.001 [2024-11-25 
12:22:52.965514] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:57.001 [2024-11-25 12:22:52.965539] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:57.001 [2024-11-25 12:22:52.965653] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:57.001 [2024-11-25 12:22:52.965764] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:57.001 [2024-11-25 12:22:52.965780] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:57.001 [2024-11-25 12:22:52.965880] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:57.001 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.001 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:57.001 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:57.001 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:57.001 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:57.001 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:57.001 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:57.001 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:57.001 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:57.001 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:23:57.001 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:57.001 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.001 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:57.001 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.001 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:57.001 12:22:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.001 12:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:57.001 "name": "raid_bdev1", 00:23:57.001 "uuid": "46a30270-d2ce-4f72-99af-ff76aab1854f", 00:23:57.001 "strip_size_kb": 0, 00:23:57.001 "state": "online", 00:23:57.001 "raid_level": "raid1", 00:23:57.001 "superblock": true, 00:23:57.001 "num_base_bdevs": 2, 00:23:57.001 "num_base_bdevs_discovered": 2, 00:23:57.001 "num_base_bdevs_operational": 2, 00:23:57.001 "base_bdevs_list": [ 00:23:57.001 { 00:23:57.001 "name": "BaseBdev1", 00:23:57.001 "uuid": "2861656d-0bf1-5d85-9b64-7bf68982176d", 00:23:57.001 "is_configured": true, 00:23:57.001 "data_offset": 256, 00:23:57.001 "data_size": 7936 00:23:57.001 }, 00:23:57.001 { 00:23:57.001 "name": "BaseBdev2", 00:23:57.001 "uuid": "cf4fabcc-eb03-5d7e-90b8-3a41c3d03757", 00:23:57.001 "is_configured": true, 00:23:57.001 "data_offset": 256, 00:23:57.001 "data_size": 7936 00:23:57.001 } 00:23:57.001 ] 00:23:57.001 }' 00:23:57.001 12:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:57.001 12:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:57.569 12:22:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:57.569 12:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:23:57.569 12:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.569 12:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:57.569 [2024-11-25 12:22:53.471183] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:57.569 12:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.569 12:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:23:57.569 12:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:57.569 12:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.569 12:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:57.569 12:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:57.569 12:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.569 12:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:23:57.569 12:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:23:57.569 12:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:23:57.569 12:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:23:57.569 12:22:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.569 12:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:57.569 [2024-11-25 12:22:53.570801] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:57.569 12:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.569 12:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:57.569 12:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:57.569 12:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:57.569 12:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:57.569 12:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:57.569 12:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:57.569 12:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:57.569 12:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:57.569 12:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:57.569 12:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:57.569 12:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:57.569 12:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.569 12:22:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.569 12:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:57.569 12:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.569 12:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:57.569 "name": "raid_bdev1", 00:23:57.569 "uuid": "46a30270-d2ce-4f72-99af-ff76aab1854f", 00:23:57.569 "strip_size_kb": 0, 00:23:57.569 "state": "online", 00:23:57.569 "raid_level": "raid1", 00:23:57.569 "superblock": true, 00:23:57.569 "num_base_bdevs": 2, 00:23:57.569 "num_base_bdevs_discovered": 1, 00:23:57.569 "num_base_bdevs_operational": 1, 00:23:57.569 "base_bdevs_list": [ 00:23:57.569 { 00:23:57.569 "name": null, 00:23:57.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:57.569 "is_configured": false, 00:23:57.569 "data_offset": 0, 00:23:57.569 "data_size": 7936 00:23:57.569 }, 00:23:57.569 { 00:23:57.569 "name": "BaseBdev2", 00:23:57.569 "uuid": "cf4fabcc-eb03-5d7e-90b8-3a41c3d03757", 00:23:57.569 "is_configured": true, 00:23:57.569 "data_offset": 256, 00:23:57.569 "data_size": 7936 00:23:57.569 } 00:23:57.569 ] 00:23:57.569 }' 00:23:57.569 12:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:57.569 12:22:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:58.214 12:22:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:58.214 12:22:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.214 12:22:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:58.214 [2024-11-25 12:22:54.115030] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:58.214 [2024-11-25 12:22:54.132273] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:58.214 12:22:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.214 12:22:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:23:58.215 [2024-11-25 12:22:54.135101] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:59.149 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:59.149 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:59.149 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:59.149 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:59.149 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:59.149 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:59.149 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.149 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:59.149 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:59.149 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.149 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:59.149 "name": "raid_bdev1", 00:23:59.149 
"uuid": "46a30270-d2ce-4f72-99af-ff76aab1854f", 00:23:59.149 "strip_size_kb": 0, 00:23:59.149 "state": "online", 00:23:59.149 "raid_level": "raid1", 00:23:59.149 "superblock": true, 00:23:59.149 "num_base_bdevs": 2, 00:23:59.149 "num_base_bdevs_discovered": 2, 00:23:59.149 "num_base_bdevs_operational": 2, 00:23:59.149 "process": { 00:23:59.149 "type": "rebuild", 00:23:59.149 "target": "spare", 00:23:59.149 "progress": { 00:23:59.150 "blocks": 2560, 00:23:59.150 "percent": 32 00:23:59.150 } 00:23:59.150 }, 00:23:59.150 "base_bdevs_list": [ 00:23:59.150 { 00:23:59.150 "name": "spare", 00:23:59.150 "uuid": "a419033e-486a-5eff-a4e0-fd7aa590d8db", 00:23:59.150 "is_configured": true, 00:23:59.150 "data_offset": 256, 00:23:59.150 "data_size": 7936 00:23:59.150 }, 00:23:59.150 { 00:23:59.150 "name": "BaseBdev2", 00:23:59.150 "uuid": "cf4fabcc-eb03-5d7e-90b8-3a41c3d03757", 00:23:59.150 "is_configured": true, 00:23:59.150 "data_offset": 256, 00:23:59.150 "data_size": 7936 00:23:59.150 } 00:23:59.150 ] 00:23:59.150 }' 00:23:59.150 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:59.150 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:59.150 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:59.408 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:59.408 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:59.408 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.408 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:59.408 [2024-11-25 12:22:55.284319] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:23:59.408 [2024-11-25 12:22:55.344216] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:59.408 [2024-11-25 12:22:55.344314] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:59.408 [2024-11-25 12:22:55.344361] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:59.408 [2024-11-25 12:22:55.344386] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:59.408 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.408 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:59.408 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:59.408 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:59.408 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:59.408 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:59.408 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:59.408 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:59.408 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:59.408 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:59.408 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:59.408 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:59.408 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.408 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:59.408 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:59.408 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.408 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:59.408 "name": "raid_bdev1", 00:23:59.408 "uuid": "46a30270-d2ce-4f72-99af-ff76aab1854f", 00:23:59.408 "strip_size_kb": 0, 00:23:59.408 "state": "online", 00:23:59.408 "raid_level": "raid1", 00:23:59.408 "superblock": true, 00:23:59.408 "num_base_bdevs": 2, 00:23:59.408 "num_base_bdevs_discovered": 1, 00:23:59.408 "num_base_bdevs_operational": 1, 00:23:59.409 "base_bdevs_list": [ 00:23:59.409 { 00:23:59.409 "name": null, 00:23:59.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:59.409 "is_configured": false, 00:23:59.409 "data_offset": 0, 00:23:59.409 "data_size": 7936 00:23:59.409 }, 00:23:59.409 { 00:23:59.409 "name": "BaseBdev2", 00:23:59.409 "uuid": "cf4fabcc-eb03-5d7e-90b8-3a41c3d03757", 00:23:59.409 "is_configured": true, 00:23:59.409 "data_offset": 256, 00:23:59.409 "data_size": 7936 00:23:59.409 } 00:23:59.409 ] 00:23:59.409 }' 00:23:59.409 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:59.409 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:59.975 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:59.975 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:23:59.975 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:59.975 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:59.975 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:59.975 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:59.975 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:59.975 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.975 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:59.975 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.975 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:59.975 "name": "raid_bdev1", 00:23:59.975 "uuid": "46a30270-d2ce-4f72-99af-ff76aab1854f", 00:23:59.975 "strip_size_kb": 0, 00:23:59.975 "state": "online", 00:23:59.975 "raid_level": "raid1", 00:23:59.975 "superblock": true, 00:23:59.975 "num_base_bdevs": 2, 00:23:59.975 "num_base_bdevs_discovered": 1, 00:23:59.975 "num_base_bdevs_operational": 1, 00:23:59.975 "base_bdevs_list": [ 00:23:59.975 { 00:23:59.975 "name": null, 00:23:59.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:59.975 "is_configured": false, 00:23:59.975 "data_offset": 0, 00:23:59.975 "data_size": 7936 00:23:59.975 }, 00:23:59.975 { 00:23:59.975 "name": "BaseBdev2", 00:23:59.975 "uuid": "cf4fabcc-eb03-5d7e-90b8-3a41c3d03757", 00:23:59.975 "is_configured": true, 00:23:59.975 "data_offset": 256, 00:23:59.975 "data_size": 7936 00:23:59.975 } 00:23:59.975 ] 00:23:59.975 }' 
00:23:59.975 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:59.975 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:59.975 12:22:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:59.975 12:22:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:59.975 12:22:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:59.975 12:22:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.975 12:22:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:59.975 [2024-11-25 12:22:56.036737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:59.975 [2024-11-25 12:22:56.052779] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:59.975 12:22:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.975 12:22:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:23:59.975 [2024-11-25 12:22:56.055497] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:01.351 12:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:01.351 12:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:01.351 12:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:01.351 12:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:24:01.351 12:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:01.351 12:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:01.351 12:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.351 12:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:01.351 12:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:01.351 12:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.351 12:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:01.351 "name": "raid_bdev1", 00:24:01.351 "uuid": "46a30270-d2ce-4f72-99af-ff76aab1854f", 00:24:01.351 "strip_size_kb": 0, 00:24:01.351 "state": "online", 00:24:01.351 "raid_level": "raid1", 00:24:01.351 "superblock": true, 00:24:01.351 "num_base_bdevs": 2, 00:24:01.351 "num_base_bdevs_discovered": 2, 00:24:01.351 "num_base_bdevs_operational": 2, 00:24:01.351 "process": { 00:24:01.351 "type": "rebuild", 00:24:01.351 "target": "spare", 00:24:01.351 "progress": { 00:24:01.351 "blocks": 2560, 00:24:01.351 "percent": 32 00:24:01.351 } 00:24:01.351 }, 00:24:01.351 "base_bdevs_list": [ 00:24:01.351 { 00:24:01.351 "name": "spare", 00:24:01.351 "uuid": "a419033e-486a-5eff-a4e0-fd7aa590d8db", 00:24:01.351 "is_configured": true, 00:24:01.351 "data_offset": 256, 00:24:01.351 "data_size": 7936 00:24:01.351 }, 00:24:01.351 { 00:24:01.351 "name": "BaseBdev2", 00:24:01.351 "uuid": "cf4fabcc-eb03-5d7e-90b8-3a41c3d03757", 00:24:01.351 "is_configured": true, 00:24:01.351 "data_offset": 256, 00:24:01.351 "data_size": 7936 00:24:01.351 } 00:24:01.351 ] 00:24:01.351 }' 00:24:01.351 12:22:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:01.351 12:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:01.351 12:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:01.351 12:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:01.351 12:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:24:01.351 12:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:24:01.351 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:24:01.351 12:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:24:01.351 12:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:24:01.351 12:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:24:01.351 12:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=796 00:24:01.351 12:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:01.351 12:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:01.351 12:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:01.351 12:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:01.351 12:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:01.351 12:22:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:01.351 12:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:01.352 12:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.352 12:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:01.352 12:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:01.352 12:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.352 12:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:01.352 "name": "raid_bdev1", 00:24:01.352 "uuid": "46a30270-d2ce-4f72-99af-ff76aab1854f", 00:24:01.352 "strip_size_kb": 0, 00:24:01.352 "state": "online", 00:24:01.352 "raid_level": "raid1", 00:24:01.352 "superblock": true, 00:24:01.352 "num_base_bdevs": 2, 00:24:01.352 "num_base_bdevs_discovered": 2, 00:24:01.352 "num_base_bdevs_operational": 2, 00:24:01.352 "process": { 00:24:01.352 "type": "rebuild", 00:24:01.352 "target": "spare", 00:24:01.352 "progress": { 00:24:01.352 "blocks": 2816, 00:24:01.352 "percent": 35 00:24:01.352 } 00:24:01.352 }, 00:24:01.352 "base_bdevs_list": [ 00:24:01.352 { 00:24:01.352 "name": "spare", 00:24:01.352 "uuid": "a419033e-486a-5eff-a4e0-fd7aa590d8db", 00:24:01.352 "is_configured": true, 00:24:01.352 "data_offset": 256, 00:24:01.352 "data_size": 7936 00:24:01.352 }, 00:24:01.352 { 00:24:01.352 "name": "BaseBdev2", 00:24:01.352 "uuid": "cf4fabcc-eb03-5d7e-90b8-3a41c3d03757", 00:24:01.352 "is_configured": true, 00:24:01.352 "data_offset": 256, 00:24:01.352 "data_size": 7936 00:24:01.352 } 00:24:01.352 ] 00:24:01.352 }' 00:24:01.352 12:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:01.352 12:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:01.352 12:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:01.352 12:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:01.352 12:22:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:02.285 12:22:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:02.285 12:22:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:02.285 12:22:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:02.285 12:22:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:02.285 12:22:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:02.285 12:22:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:02.543 12:22:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:02.543 12:22:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:02.543 12:22:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.543 12:22:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:02.543 12:22:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.543 12:22:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:02.543 "name": "raid_bdev1", 00:24:02.543 "uuid": "46a30270-d2ce-4f72-99af-ff76aab1854f", 00:24:02.543 "strip_size_kb": 0, 00:24:02.543 "state": "online", 00:24:02.543 "raid_level": "raid1", 00:24:02.543 "superblock": true, 00:24:02.543 "num_base_bdevs": 2, 00:24:02.543 "num_base_bdevs_discovered": 2, 00:24:02.543 "num_base_bdevs_operational": 2, 00:24:02.543 "process": { 00:24:02.543 "type": "rebuild", 00:24:02.543 "target": "spare", 00:24:02.543 "progress": { 00:24:02.543 "blocks": 5888, 00:24:02.543 "percent": 74 00:24:02.543 } 00:24:02.543 }, 00:24:02.543 "base_bdevs_list": [ 00:24:02.543 { 00:24:02.543 "name": "spare", 00:24:02.543 "uuid": "a419033e-486a-5eff-a4e0-fd7aa590d8db", 00:24:02.543 "is_configured": true, 00:24:02.543 "data_offset": 256, 00:24:02.543 "data_size": 7936 00:24:02.543 }, 00:24:02.543 { 00:24:02.543 "name": "BaseBdev2", 00:24:02.543 "uuid": "cf4fabcc-eb03-5d7e-90b8-3a41c3d03757", 00:24:02.543 "is_configured": true, 00:24:02.543 "data_offset": 256, 00:24:02.543 "data_size": 7936 00:24:02.543 } 00:24:02.543 ] 00:24:02.543 }' 00:24:02.543 12:22:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:02.543 12:22:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:02.543 12:22:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:02.543 12:22:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:02.543 12:22:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:03.158 [2024-11-25 12:22:59.178565] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:03.158 [2024-11-25 12:22:59.178693] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:03.158 [2024-11-25 12:22:59.178855] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:03.723 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:03.723 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:03.723 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:03.723 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:03.723 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:03.723 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:03.723 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:03.723 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.723 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.723 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:03.723 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.723 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:03.723 "name": "raid_bdev1", 00:24:03.723 "uuid": "46a30270-d2ce-4f72-99af-ff76aab1854f", 00:24:03.723 "strip_size_kb": 0, 00:24:03.723 "state": "online", 00:24:03.723 "raid_level": "raid1", 00:24:03.723 "superblock": true, 00:24:03.723 "num_base_bdevs": 2, 00:24:03.723 
"num_base_bdevs_discovered": 2, 00:24:03.723 "num_base_bdevs_operational": 2, 00:24:03.723 "base_bdevs_list": [ 00:24:03.723 { 00:24:03.723 "name": "spare", 00:24:03.723 "uuid": "a419033e-486a-5eff-a4e0-fd7aa590d8db", 00:24:03.723 "is_configured": true, 00:24:03.723 "data_offset": 256, 00:24:03.723 "data_size": 7936 00:24:03.723 }, 00:24:03.723 { 00:24:03.723 "name": "BaseBdev2", 00:24:03.723 "uuid": "cf4fabcc-eb03-5d7e-90b8-3a41c3d03757", 00:24:03.723 "is_configured": true, 00:24:03.723 "data_offset": 256, 00:24:03.723 "data_size": 7936 00:24:03.723 } 00:24:03.723 ] 00:24:03.723 }' 00:24:03.723 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:03.723 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:03.723 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:03.723 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:24:03.723 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:24:03.723 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:03.723 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:03.723 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:03.723 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:03.723 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:03.723 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:03.723 12:22:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.723 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.723 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:03.723 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.723 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:03.723 "name": "raid_bdev1", 00:24:03.723 "uuid": "46a30270-d2ce-4f72-99af-ff76aab1854f", 00:24:03.723 "strip_size_kb": 0, 00:24:03.723 "state": "online", 00:24:03.723 "raid_level": "raid1", 00:24:03.723 "superblock": true, 00:24:03.723 "num_base_bdevs": 2, 00:24:03.723 "num_base_bdevs_discovered": 2, 00:24:03.723 "num_base_bdevs_operational": 2, 00:24:03.723 "base_bdevs_list": [ 00:24:03.723 { 00:24:03.723 "name": "spare", 00:24:03.723 "uuid": "a419033e-486a-5eff-a4e0-fd7aa590d8db", 00:24:03.723 "is_configured": true, 00:24:03.723 "data_offset": 256, 00:24:03.723 "data_size": 7936 00:24:03.723 }, 00:24:03.723 { 00:24:03.723 "name": "BaseBdev2", 00:24:03.723 "uuid": "cf4fabcc-eb03-5d7e-90b8-3a41c3d03757", 00:24:03.723 "is_configured": true, 00:24:03.723 "data_offset": 256, 00:24:03.723 "data_size": 7936 00:24:03.723 } 00:24:03.723 ] 00:24:03.723 }' 00:24:03.724 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:03.982 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:03.982 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:03.982 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:03.982 12:22:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:03.982 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:03.982 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:03.982 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:03.982 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:03.982 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:03.982 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:03.982 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:03.982 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:03.982 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:03.982 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:03.982 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.982 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:03.982 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.982 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.982 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:03.982 "name": 
"raid_bdev1", 00:24:03.982 "uuid": "46a30270-d2ce-4f72-99af-ff76aab1854f", 00:24:03.982 "strip_size_kb": 0, 00:24:03.982 "state": "online", 00:24:03.982 "raid_level": "raid1", 00:24:03.982 "superblock": true, 00:24:03.982 "num_base_bdevs": 2, 00:24:03.982 "num_base_bdevs_discovered": 2, 00:24:03.982 "num_base_bdevs_operational": 2, 00:24:03.982 "base_bdevs_list": [ 00:24:03.982 { 00:24:03.982 "name": "spare", 00:24:03.982 "uuid": "a419033e-486a-5eff-a4e0-fd7aa590d8db", 00:24:03.982 "is_configured": true, 00:24:03.982 "data_offset": 256, 00:24:03.982 "data_size": 7936 00:24:03.982 }, 00:24:03.982 { 00:24:03.982 "name": "BaseBdev2", 00:24:03.982 "uuid": "cf4fabcc-eb03-5d7e-90b8-3a41c3d03757", 00:24:03.982 "is_configured": true, 00:24:03.982 "data_offset": 256, 00:24:03.982 "data_size": 7936 00:24:03.982 } 00:24:03.982 ] 00:24:03.982 }' 00:24:03.982 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:03.982 12:22:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:04.549 12:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:04.549 12:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.549 12:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:04.549 [2024-11-25 12:23:00.374987] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:04.549 [2024-11-25 12:23:00.375177] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:04.549 [2024-11-25 12:23:00.375427] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:04.549 [2024-11-25 12:23:00.375660] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:04.549 [2024-11-25 
12:23:00.375801] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:04.549 12:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.549 12:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:24:04.549 12:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:04.549 12:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.549 12:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:04.549 12:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.549 12:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:24:04.549 12:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:24:04.549 12:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:24:04.549 12:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:24:04.549 12:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.549 12:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:04.549 12:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.549 12:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:04.549 12:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.549 12:23:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:04.549 [2024-11-25 12:23:00.446980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:04.549 [2024-11-25 12:23:00.447047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:04.549 [2024-11-25 12:23:00.447080] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:24:04.549 [2024-11-25 12:23:00.447094] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:04.549 [2024-11-25 12:23:00.449657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:04.549 [2024-11-25 12:23:00.449703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:04.549 [2024-11-25 12:23:00.449781] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:04.549 [2024-11-25 12:23:00.449855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:04.549 [2024-11-25 12:23:00.450001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:04.549 spare 00:24:04.549 12:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.549 12:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:24:04.549 12:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.549 12:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:04.549 [2024-11-25 12:23:00.550121] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:24:04.549 [2024-11-25 12:23:00.550168] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:24:04.549 [2024-11-25 12:23:00.550314] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:24:04.549 [2024-11-25 12:23:00.550481] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:24:04.549 [2024-11-25 12:23:00.550499] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:24:04.549 [2024-11-25 12:23:00.550646] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:04.549 12:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.549 12:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:04.549 12:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:04.549 12:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:04.549 12:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:04.549 12:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:04.549 12:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:04.549 12:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:04.549 12:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:04.549 12:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:04.549 12:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:04.549 12:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:04.549 
12:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:04.549 12:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.549 12:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:04.549 12:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.549 12:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:04.549 "name": "raid_bdev1", 00:24:04.549 "uuid": "46a30270-d2ce-4f72-99af-ff76aab1854f", 00:24:04.549 "strip_size_kb": 0, 00:24:04.550 "state": "online", 00:24:04.550 "raid_level": "raid1", 00:24:04.550 "superblock": true, 00:24:04.550 "num_base_bdevs": 2, 00:24:04.550 "num_base_bdevs_discovered": 2, 00:24:04.550 "num_base_bdevs_operational": 2, 00:24:04.550 "base_bdevs_list": [ 00:24:04.550 { 00:24:04.550 "name": "spare", 00:24:04.550 "uuid": "a419033e-486a-5eff-a4e0-fd7aa590d8db", 00:24:04.550 "is_configured": true, 00:24:04.550 "data_offset": 256, 00:24:04.550 "data_size": 7936 00:24:04.550 }, 00:24:04.550 { 00:24:04.550 "name": "BaseBdev2", 00:24:04.550 "uuid": "cf4fabcc-eb03-5d7e-90b8-3a41c3d03757", 00:24:04.550 "is_configured": true, 00:24:04.550 "data_offset": 256, 00:24:04.550 "data_size": 7936 00:24:04.550 } 00:24:04.550 ] 00:24:04.550 }' 00:24:04.550 12:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:04.550 12:23:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:05.115 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:05.116 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:05.116 12:23:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:05.116 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:05.116 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:05.116 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:05.116 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.116 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:05.116 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.116 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.116 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:05.116 "name": "raid_bdev1", 00:24:05.116 "uuid": "46a30270-d2ce-4f72-99af-ff76aab1854f", 00:24:05.116 "strip_size_kb": 0, 00:24:05.116 "state": "online", 00:24:05.116 "raid_level": "raid1", 00:24:05.116 "superblock": true, 00:24:05.116 "num_base_bdevs": 2, 00:24:05.116 "num_base_bdevs_discovered": 2, 00:24:05.116 "num_base_bdevs_operational": 2, 00:24:05.116 "base_bdevs_list": [ 00:24:05.116 { 00:24:05.116 "name": "spare", 00:24:05.116 "uuid": "a419033e-486a-5eff-a4e0-fd7aa590d8db", 00:24:05.116 "is_configured": true, 00:24:05.116 "data_offset": 256, 00:24:05.116 "data_size": 7936 00:24:05.116 }, 00:24:05.116 { 00:24:05.116 "name": "BaseBdev2", 00:24:05.116 "uuid": "cf4fabcc-eb03-5d7e-90b8-3a41c3d03757", 00:24:05.116 "is_configured": true, 00:24:05.116 "data_offset": 256, 00:24:05.116 "data_size": 7936 00:24:05.116 } 00:24:05.116 ] 00:24:05.116 }' 00:24:05.116 12:23:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:05.116 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:05.116 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:05.374 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:05.374 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:05.374 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.374 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:05.374 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:05.374 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.374 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:24:05.374 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:05.374 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.374 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:05.374 [2024-11-25 12:23:01.271312] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:05.374 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.374 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:05.374 12:23:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:05.374 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:05.374 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:05.374 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:05.374 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:05.374 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:05.374 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:05.374 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:05.374 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:05.374 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:05.374 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.374 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.374 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:05.374 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.374 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:05.374 "name": "raid_bdev1", 00:24:05.374 "uuid": "46a30270-d2ce-4f72-99af-ff76aab1854f", 00:24:05.374 "strip_size_kb": 0, 00:24:05.374 "state": "online", 00:24:05.374 
"raid_level": "raid1", 00:24:05.374 "superblock": true, 00:24:05.374 "num_base_bdevs": 2, 00:24:05.374 "num_base_bdevs_discovered": 1, 00:24:05.374 "num_base_bdevs_operational": 1, 00:24:05.374 "base_bdevs_list": [ 00:24:05.374 { 00:24:05.374 "name": null, 00:24:05.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:05.374 "is_configured": false, 00:24:05.374 "data_offset": 0, 00:24:05.374 "data_size": 7936 00:24:05.374 }, 00:24:05.374 { 00:24:05.374 "name": "BaseBdev2", 00:24:05.374 "uuid": "cf4fabcc-eb03-5d7e-90b8-3a41c3d03757", 00:24:05.374 "is_configured": true, 00:24:05.374 "data_offset": 256, 00:24:05.374 "data_size": 7936 00:24:05.374 } 00:24:05.374 ] 00:24:05.374 }' 00:24:05.375 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:05.375 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:05.941 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:05.941 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.941 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:05.941 [2024-11-25 12:23:01.827527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:05.941 [2024-11-25 12:23:01.827771] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:05.941 [2024-11-25 12:23:01.827797] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:24:05.941 [2024-11-25 12:23:01.827850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:05.941 [2024-11-25 12:23:01.843678] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:24:05.941 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.941 12:23:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:24:05.941 [2024-11-25 12:23:01.846219] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:06.875 12:23:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:06.875 12:23:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:06.875 12:23:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:06.875 12:23:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:06.875 12:23:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:06.875 12:23:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:06.875 12:23:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:06.875 12:23:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.875 12:23:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:06.875 12:23:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.875 12:23:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:24:06.875 "name": "raid_bdev1", 00:24:06.875 "uuid": "46a30270-d2ce-4f72-99af-ff76aab1854f", 00:24:06.875 "strip_size_kb": 0, 00:24:06.875 "state": "online", 00:24:06.875 "raid_level": "raid1", 00:24:06.875 "superblock": true, 00:24:06.875 "num_base_bdevs": 2, 00:24:06.875 "num_base_bdevs_discovered": 2, 00:24:06.875 "num_base_bdevs_operational": 2, 00:24:06.875 "process": { 00:24:06.875 "type": "rebuild", 00:24:06.875 "target": "spare", 00:24:06.875 "progress": { 00:24:06.875 "blocks": 2560, 00:24:06.875 "percent": 32 00:24:06.875 } 00:24:06.875 }, 00:24:06.875 "base_bdevs_list": [ 00:24:06.875 { 00:24:06.875 "name": "spare", 00:24:06.875 "uuid": "a419033e-486a-5eff-a4e0-fd7aa590d8db", 00:24:06.875 "is_configured": true, 00:24:06.875 "data_offset": 256, 00:24:06.875 "data_size": 7936 00:24:06.875 }, 00:24:06.875 { 00:24:06.875 "name": "BaseBdev2", 00:24:06.875 "uuid": "cf4fabcc-eb03-5d7e-90b8-3a41c3d03757", 00:24:06.875 "is_configured": true, 00:24:06.875 "data_offset": 256, 00:24:06.875 "data_size": 7936 00:24:06.875 } 00:24:06.875 ] 00:24:06.875 }' 00:24:06.875 12:23:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:06.875 12:23:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:06.875 12:23:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:07.134 12:23:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:07.134 12:23:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:24:07.134 12:23:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.134 12:23:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:07.134 [2024-11-25 12:23:03.007259] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:07.134 [2024-11-25 12:23:03.054912] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:07.134 [2024-11-25 12:23:03.054999] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:07.134 [2024-11-25 12:23:03.055024] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:07.134 [2024-11-25 12:23:03.055039] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:07.134 12:23:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.134 12:23:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:07.134 12:23:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:07.134 12:23:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:07.134 12:23:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:07.134 12:23:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:07.134 12:23:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:07.134 12:23:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:07.134 12:23:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:07.134 12:23:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:07.134 12:23:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:07.134 12:23:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:07.134 12:23:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.134 12:23:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:07.134 12:23:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:07.134 12:23:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.134 12:23:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:07.134 "name": "raid_bdev1", 00:24:07.134 "uuid": "46a30270-d2ce-4f72-99af-ff76aab1854f", 00:24:07.134 "strip_size_kb": 0, 00:24:07.134 "state": "online", 00:24:07.134 "raid_level": "raid1", 00:24:07.134 "superblock": true, 00:24:07.134 "num_base_bdevs": 2, 00:24:07.134 "num_base_bdevs_discovered": 1, 00:24:07.134 "num_base_bdevs_operational": 1, 00:24:07.134 "base_bdevs_list": [ 00:24:07.134 { 00:24:07.134 "name": null, 00:24:07.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.134 "is_configured": false, 00:24:07.134 "data_offset": 0, 00:24:07.134 "data_size": 7936 00:24:07.134 }, 00:24:07.134 { 00:24:07.134 "name": "BaseBdev2", 00:24:07.134 "uuid": "cf4fabcc-eb03-5d7e-90b8-3a41c3d03757", 00:24:07.134 "is_configured": true, 00:24:07.134 "data_offset": 256, 00:24:07.134 "data_size": 7936 00:24:07.134 } 00:24:07.134 ] 00:24:07.134 }' 00:24:07.134 12:23:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:07.134 12:23:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:07.701 12:23:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:07.701 12:23:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.701 12:23:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:07.701 [2024-11-25 12:23:03.603240] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:07.701 [2024-11-25 12:23:03.603322] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:07.701 [2024-11-25 12:23:03.603391] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:24:07.701 [2024-11-25 12:23:03.603411] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:07.701 [2024-11-25 12:23:03.603658] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:07.701 [2024-11-25 12:23:03.603690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:07.701 [2024-11-25 12:23:03.603767] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:07.701 [2024-11-25 12:23:03.603791] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:07.701 [2024-11-25 12:23:03.603804] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
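The examine-superblock debug lines above show the decision SPDK makes when a base bdev with a RAID superblock reappears: the superblock's sequence number on `spare` (4) is older than the raid bdev's (5), and because the raid superblock still knows this bdev's uuid, it is re-added and rebuilt. Later in the log, `BaseBdev1` hits the same stale-sequence path but its uuid is absent from the raid superblock, so examine fails with Invalid argument. A minimal Python sketch of that decision logic follows; the function and field names are illustrative, not SPDK's actual API:

```python
# Hypothetical re-expression of the raid_bdev_examine_sb decision visible
# in the log above. Names are illustrative; SPDK implements this in C in
# bdev_raid.c.
def examine_sb(sb_seq_number: int, raid_seq_number: int,
               uuid_in_raid_sb: bool) -> str:
    """Decide what to do with a base bdev whose RAID superblock was found."""
    if sb_seq_number < raid_seq_number:
        # Stale superblock: the existing raid bdev has newer metadata.
        if uuid_in_raid_sb:
            return "re-add"      # re-add the bdev and start a rebuild on it
        return "reject"          # uuid unknown to the raid superblock -> error
    return "configure"           # up-to-date member, configure normally

# 'spare' in the log: seq 4 < 5, uuid known -> re-added and rebuilt
print(examine_sb(4, 5, True))    # re-add
# 'BaseBdev1' later: seq 1 < 5, uuid absent -> Invalid argument
print(examine_sb(1, 5, False))   # reject
```

This mirrors why the test can delete `spare` mid-rebuild and re-create it: re-registering the passthru bdev triggers examine, the stale-but-known superblock is accepted, and a fresh rebuild starts.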
00:24:07.701 [2024-11-25 12:23:03.603854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:07.701 [2024-11-25 12:23:03.619715] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:24:07.701 spare 00:24:07.701 12:23:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.701 12:23:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:24:07.701 [2024-11-25 12:23:03.622144] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:08.635 12:23:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:08.635 12:23:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:08.635 12:23:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:08.635 12:23:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:08.635 12:23:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:08.635 12:23:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:08.635 12:23:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.635 12:23:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:08.635 12:23:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:08.635 12:23:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.635 12:23:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:24:08.635 "name": "raid_bdev1", 00:24:08.635 "uuid": "46a30270-d2ce-4f72-99af-ff76aab1854f", 00:24:08.635 "strip_size_kb": 0, 00:24:08.635 "state": "online", 00:24:08.635 "raid_level": "raid1", 00:24:08.635 "superblock": true, 00:24:08.635 "num_base_bdevs": 2, 00:24:08.635 "num_base_bdevs_discovered": 2, 00:24:08.635 "num_base_bdevs_operational": 2, 00:24:08.635 "process": { 00:24:08.635 "type": "rebuild", 00:24:08.635 "target": "spare", 00:24:08.635 "progress": { 00:24:08.635 "blocks": 2560, 00:24:08.635 "percent": 32 00:24:08.635 } 00:24:08.635 }, 00:24:08.635 "base_bdevs_list": [ 00:24:08.636 { 00:24:08.636 "name": "spare", 00:24:08.636 "uuid": "a419033e-486a-5eff-a4e0-fd7aa590d8db", 00:24:08.636 "is_configured": true, 00:24:08.636 "data_offset": 256, 00:24:08.636 "data_size": 7936 00:24:08.636 }, 00:24:08.636 { 00:24:08.636 "name": "BaseBdev2", 00:24:08.636 "uuid": "cf4fabcc-eb03-5d7e-90b8-3a41c3d03757", 00:24:08.636 "is_configured": true, 00:24:08.636 "data_offset": 256, 00:24:08.636 "data_size": 7936 00:24:08.636 } 00:24:08.636 ] 00:24:08.636 }' 00:24:08.636 12:23:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:08.636 12:23:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:08.636 12:23:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:08.894 12:23:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:08.894 12:23:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:24:08.894 12:23:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.894 12:23:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:08.894 [2024-11-25 
12:23:04.779582] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:08.894 [2024-11-25 12:23:04.830907] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:08.894 [2024-11-25 12:23:04.831158] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:08.894 [2024-11-25 12:23:04.831312] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:08.894 [2024-11-25 12:23:04.831402] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:08.894 12:23:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.894 12:23:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:08.894 12:23:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:08.894 12:23:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:08.894 12:23:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:08.894 12:23:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:08.894 12:23:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:08.894 12:23:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:08.894 12:23:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:08.894 12:23:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:08.894 12:23:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:08.894 12:23:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:08.894 12:23:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.894 12:23:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:08.894 12:23:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:08.894 12:23:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.895 12:23:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:08.895 "name": "raid_bdev1", 00:24:08.895 "uuid": "46a30270-d2ce-4f72-99af-ff76aab1854f", 00:24:08.895 "strip_size_kb": 0, 00:24:08.895 "state": "online", 00:24:08.895 "raid_level": "raid1", 00:24:08.895 "superblock": true, 00:24:08.895 "num_base_bdevs": 2, 00:24:08.895 "num_base_bdevs_discovered": 1, 00:24:08.895 "num_base_bdevs_operational": 1, 00:24:08.895 "base_bdevs_list": [ 00:24:08.895 { 00:24:08.895 "name": null, 00:24:08.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:08.895 "is_configured": false, 00:24:08.895 "data_offset": 0, 00:24:08.895 "data_size": 7936 00:24:08.895 }, 00:24:08.895 { 00:24:08.895 "name": "BaseBdev2", 00:24:08.895 "uuid": "cf4fabcc-eb03-5d7e-90b8-3a41c3d03757", 00:24:08.895 "is_configured": true, 00:24:08.895 "data_offset": 256, 00:24:08.895 "data_size": 7936 00:24:08.895 } 00:24:08.895 ] 00:24:08.895 }' 00:24:08.895 12:23:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:08.895 12:23:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:09.462 12:23:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:09.462 12:23:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:09.462 12:23:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:09.462 12:23:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:09.462 12:23:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:09.462 12:23:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:09.462 12:23:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.462 12:23:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:09.462 12:23:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:09.462 12:23:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.462 12:23:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:09.462 "name": "raid_bdev1", 00:24:09.462 "uuid": "46a30270-d2ce-4f72-99af-ff76aab1854f", 00:24:09.462 "strip_size_kb": 0, 00:24:09.462 "state": "online", 00:24:09.462 "raid_level": "raid1", 00:24:09.462 "superblock": true, 00:24:09.462 "num_base_bdevs": 2, 00:24:09.462 "num_base_bdevs_discovered": 1, 00:24:09.462 "num_base_bdevs_operational": 1, 00:24:09.462 "base_bdevs_list": [ 00:24:09.462 { 00:24:09.462 "name": null, 00:24:09.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:09.462 "is_configured": false, 00:24:09.462 "data_offset": 0, 00:24:09.462 "data_size": 7936 00:24:09.462 }, 00:24:09.462 { 00:24:09.462 "name": "BaseBdev2", 00:24:09.462 "uuid": "cf4fabcc-eb03-5d7e-90b8-3a41c3d03757", 00:24:09.462 "is_configured": true, 00:24:09.462 "data_offset": 256, 
00:24:09.462 "data_size": 7936 00:24:09.462 } 00:24:09.462 ] 00:24:09.462 }' 00:24:09.462 12:23:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:09.462 12:23:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:09.462 12:23:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:09.462 12:23:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:09.462 12:23:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:24:09.462 12:23:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.462 12:23:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:09.462 12:23:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.462 12:23:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:09.462 12:23:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.462 12:23:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:09.462 [2024-11-25 12:23:05.540179] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:09.462 [2024-11-25 12:23:05.540252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:09.462 [2024-11-25 12:23:05.540286] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:24:09.462 [2024-11-25 12:23:05.540301] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:09.463 [2024-11-25 12:23:05.540534] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:09.463 [2024-11-25 12:23:05.540558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:09.463 [2024-11-25 12:23:05.540626] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:09.463 [2024-11-25 12:23:05.540647] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:09.463 [2024-11-25 12:23:05.540671] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:09.463 [2024-11-25 12:23:05.540684] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:24:09.463 BaseBdev1 00:24:09.463 12:23:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.463 12:23:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:24:10.485 12:23:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:10.485 12:23:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:10.485 12:23:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:10.485 12:23:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:10.485 12:23:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:10.485 12:23:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:10.485 12:23:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:10.485 12:23:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:10.485 12:23:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:10.485 12:23:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:10.485 12:23:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:10.485 12:23:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:10.485 12:23:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.485 12:23:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:10.744 12:23:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.744 12:23:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:10.744 "name": "raid_bdev1", 00:24:10.744 "uuid": "46a30270-d2ce-4f72-99af-ff76aab1854f", 00:24:10.744 "strip_size_kb": 0, 00:24:10.744 "state": "online", 00:24:10.744 "raid_level": "raid1", 00:24:10.744 "superblock": true, 00:24:10.744 "num_base_bdevs": 2, 00:24:10.744 "num_base_bdevs_discovered": 1, 00:24:10.744 "num_base_bdevs_operational": 1, 00:24:10.744 "base_bdevs_list": [ 00:24:10.744 { 00:24:10.744 "name": null, 00:24:10.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:10.744 "is_configured": false, 00:24:10.744 "data_offset": 0, 00:24:10.744 "data_size": 7936 00:24:10.744 }, 00:24:10.744 { 00:24:10.744 "name": "BaseBdev2", 00:24:10.744 "uuid": "cf4fabcc-eb03-5d7e-90b8-3a41c3d03757", 00:24:10.744 "is_configured": true, 00:24:10.744 "data_offset": 256, 00:24:10.744 "data_size": 7936 00:24:10.744 } 00:24:10.744 ] 00:24:10.744 }' 00:24:10.744 12:23:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:10.744 12:23:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:11.012 12:23:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:11.012 12:23:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:11.012 12:23:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:11.012 12:23:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:11.012 12:23:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:11.012 12:23:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:11.012 12:23:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.012 12:23:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:11.012 12:23:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:11.012 12:23:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.275 12:23:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:11.275 "name": "raid_bdev1", 00:24:11.275 "uuid": "46a30270-d2ce-4f72-99af-ff76aab1854f", 00:24:11.275 "strip_size_kb": 0, 00:24:11.275 "state": "online", 00:24:11.275 "raid_level": "raid1", 00:24:11.275 "superblock": true, 00:24:11.275 "num_base_bdevs": 2, 00:24:11.275 "num_base_bdevs_discovered": 1, 00:24:11.275 "num_base_bdevs_operational": 1, 00:24:11.275 "base_bdevs_list": [ 00:24:11.275 { 00:24:11.275 "name": 
null, 00:24:11.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:11.275 "is_configured": false, 00:24:11.275 "data_offset": 0, 00:24:11.275 "data_size": 7936 00:24:11.275 }, 00:24:11.275 { 00:24:11.275 "name": "BaseBdev2", 00:24:11.275 "uuid": "cf4fabcc-eb03-5d7e-90b8-3a41c3d03757", 00:24:11.275 "is_configured": true, 00:24:11.275 "data_offset": 256, 00:24:11.275 "data_size": 7936 00:24:11.275 } 00:24:11.275 ] 00:24:11.275 }' 00:24:11.275 12:23:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:11.275 12:23:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:11.275 12:23:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:11.275 12:23:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:11.275 12:23:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:11.275 12:23:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:24:11.275 12:23:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:11.275 12:23:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:11.275 12:23:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:11.275 12:23:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:11.275 12:23:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:11.275 12:23:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:11.275 12:23:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.275 12:23:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:11.276 [2024-11-25 12:23:07.232753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:11.276 [2024-11-25 12:23:07.232957] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:11.276 [2024-11-25 12:23:07.232986] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:11.276 request: 00:24:11.276 { 00:24:11.276 "base_bdev": "BaseBdev1", 00:24:11.276 "raid_bdev": "raid_bdev1", 00:24:11.276 "method": "bdev_raid_add_base_bdev", 00:24:11.276 "req_id": 1 00:24:11.276 } 00:24:11.276 Got JSON-RPC error response 00:24:11.276 response: 00:24:11.276 { 00:24:11.276 "code": -22, 00:24:11.276 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:24:11.276 } 00:24:11.276 12:23:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:11.276 12:23:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:24:11.276 12:23:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:11.276 12:23:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:11.276 12:23:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:11.276 12:23:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:24:12.211 12:23:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:24:12.211 12:23:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:12.211 12:23:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:12.211 12:23:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:12.211 12:23:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:12.211 12:23:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:12.211 12:23:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:12.211 12:23:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:12.211 12:23:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:12.211 12:23:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:12.211 12:23:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:12.211 12:23:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:12.211 12:23:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.211 12:23:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:12.211 12:23:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.211 12:23:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:12.211 "name": "raid_bdev1", 00:24:12.211 "uuid": "46a30270-d2ce-4f72-99af-ff76aab1854f", 00:24:12.211 "strip_size_kb": 0, 
00:24:12.211 "state": "online", 00:24:12.211 "raid_level": "raid1", 00:24:12.211 "superblock": true, 00:24:12.211 "num_base_bdevs": 2, 00:24:12.211 "num_base_bdevs_discovered": 1, 00:24:12.211 "num_base_bdevs_operational": 1, 00:24:12.211 "base_bdevs_list": [ 00:24:12.211 { 00:24:12.211 "name": null, 00:24:12.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:12.211 "is_configured": false, 00:24:12.211 "data_offset": 0, 00:24:12.211 "data_size": 7936 00:24:12.211 }, 00:24:12.211 { 00:24:12.211 "name": "BaseBdev2", 00:24:12.211 "uuid": "cf4fabcc-eb03-5d7e-90b8-3a41c3d03757", 00:24:12.211 "is_configured": true, 00:24:12.211 "data_offset": 256, 00:24:12.211 "data_size": 7936 00:24:12.211 } 00:24:12.211 ] 00:24:12.211 }' 00:24:12.211 12:23:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:12.211 12:23:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:12.777 12:23:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:12.777 12:23:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:12.777 12:23:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:12.777 12:23:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:12.777 12:23:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:12.777 12:23:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:12.777 12:23:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.777 12:23:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:12.777 12:23:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:12.777 12:23:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.777 12:23:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:12.777 "name": "raid_bdev1", 00:24:12.777 "uuid": "46a30270-d2ce-4f72-99af-ff76aab1854f", 00:24:12.777 "strip_size_kb": 0, 00:24:12.777 "state": "online", 00:24:12.777 "raid_level": "raid1", 00:24:12.777 "superblock": true, 00:24:12.777 "num_base_bdevs": 2, 00:24:12.777 "num_base_bdevs_discovered": 1, 00:24:12.778 "num_base_bdevs_operational": 1, 00:24:12.778 "base_bdevs_list": [ 00:24:12.778 { 00:24:12.778 "name": null, 00:24:12.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:12.778 "is_configured": false, 00:24:12.778 "data_offset": 0, 00:24:12.778 "data_size": 7936 00:24:12.778 }, 00:24:12.778 { 00:24:12.778 "name": "BaseBdev2", 00:24:12.778 "uuid": "cf4fabcc-eb03-5d7e-90b8-3a41c3d03757", 00:24:12.778 "is_configured": true, 00:24:12.778 "data_offset": 256, 00:24:12.778 "data_size": 7936 00:24:12.778 } 00:24:12.778 ] 00:24:12.778 }' 00:24:12.778 12:23:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:12.778 12:23:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:12.778 12:23:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:13.037 12:23:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:13.037 12:23:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89495 00:24:13.037 12:23:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89495 ']' 00:24:13.037 12:23:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89495 00:24:13.037 12:23:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:24:13.037 12:23:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:13.037 12:23:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89495 00:24:13.037 killing process with pid 89495 00:24:13.037 Received shutdown signal, test time was about 60.000000 seconds 00:24:13.037 00:24:13.037 Latency(us) 00:24:13.037 [2024-11-25T12:23:09.128Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:13.037 [2024-11-25T12:23:09.128Z] =================================================================================================================== 00:24:13.037 [2024-11-25T12:23:09.128Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:13.037 12:23:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:13.037 12:23:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:13.037 12:23:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89495' 00:24:13.037 12:23:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89495 00:24:13.037 [2024-11-25 12:23:08.929250] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:13.037 12:23:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89495 00:24:13.037 [2024-11-25 12:23:08.929473] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:13.037 [2024-11-25 12:23:08.929541] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:24:13.037 [2024-11-25 12:23:08.929563] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:24:13.306 [2024-11-25 12:23:09.205087] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:14.249 12:23:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:24:14.249 ************************************ 00:24:14.249 END TEST raid_rebuild_test_sb_md_interleaved 00:24:14.249 ************************************ 00:24:14.249 00:24:14.249 real 0m18.511s 00:24:14.249 user 0m25.252s 00:24:14.249 sys 0m1.365s 00:24:14.249 12:23:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:14.249 12:23:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:14.249 12:23:10 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:24:14.249 12:23:10 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:24:14.249 12:23:10 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89495 ']' 00:24:14.249 12:23:10 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89495 00:24:14.249 12:23:10 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:24:14.249 00:24:14.249 real 12m58.408s 00:24:14.249 user 18m17.354s 00:24:14.249 sys 1m44.783s 00:24:14.249 12:23:10 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:14.249 12:23:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:14.249 ************************************ 00:24:14.249 END TEST bdev_raid 00:24:14.249 ************************************ 00:24:14.508 12:23:10 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:24:14.508 12:23:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:14.508 12:23:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:14.508 12:23:10 -- common/autotest_common.sh@10 -- # set +x 00:24:14.508 
************************************ 00:24:14.508 START TEST spdkcli_raid 00:24:14.508 ************************************ 00:24:14.508 12:23:10 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:24:14.508 * Looking for test storage... 00:24:14.508 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:24:14.508 12:23:10 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:14.508 12:23:10 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:14.508 12:23:10 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:24:14.508 12:23:10 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:14.508 12:23:10 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:14.508 12:23:10 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:14.508 12:23:10 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:14.508 12:23:10 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:24:14.508 12:23:10 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:24:14.508 12:23:10 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:24:14.508 12:23:10 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:24:14.508 12:23:10 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:24:14.508 12:23:10 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:24:14.508 12:23:10 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:24:14.508 12:23:10 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:14.508 12:23:10 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:24:14.508 12:23:10 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:24:14.508 12:23:10 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:14.508 12:23:10 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:14.508 12:23:10 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:24:14.508 12:23:10 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:24:14.508 12:23:10 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:14.508 12:23:10 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:24:14.508 12:23:10 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:24:14.508 12:23:10 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:24:14.508 12:23:10 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:24:14.508 12:23:10 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:14.508 12:23:10 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:24:14.508 12:23:10 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:24:14.508 12:23:10 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:14.508 12:23:10 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:14.508 12:23:10 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:24:14.508 12:23:10 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:14.508 12:23:10 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:14.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.508 --rc genhtml_branch_coverage=1 00:24:14.508 --rc genhtml_function_coverage=1 00:24:14.508 --rc genhtml_legend=1 00:24:14.508 --rc geninfo_all_blocks=1 00:24:14.508 --rc geninfo_unexecuted_blocks=1 00:24:14.508 00:24:14.508 ' 00:24:14.508 12:23:10 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:14.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.508 --rc genhtml_branch_coverage=1 00:24:14.508 --rc genhtml_function_coverage=1 00:24:14.508 --rc genhtml_legend=1 00:24:14.508 --rc geninfo_all_blocks=1 00:24:14.508 --rc geninfo_unexecuted_blocks=1 00:24:14.508 00:24:14.508 ' 00:24:14.508 
12:23:10 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:14.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.508 --rc genhtml_branch_coverage=1 00:24:14.508 --rc genhtml_function_coverage=1 00:24:14.508 --rc genhtml_legend=1 00:24:14.508 --rc geninfo_all_blocks=1 00:24:14.508 --rc geninfo_unexecuted_blocks=1 00:24:14.508 00:24:14.508 ' 00:24:14.508 12:23:10 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:14.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.508 --rc genhtml_branch_coverage=1 00:24:14.508 --rc genhtml_function_coverage=1 00:24:14.508 --rc genhtml_legend=1 00:24:14.508 --rc geninfo_all_blocks=1 00:24:14.508 --rc geninfo_unexecuted_blocks=1 00:24:14.508 00:24:14.508 ' 00:24:14.508 12:23:10 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:24:14.508 12:23:10 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:24:14.508 12:23:10 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:24:14.508 12:23:10 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:24:14.508 12:23:10 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:24:14.508 12:23:10 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:24:14.508 12:23:10 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:24:14.508 12:23:10 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:24:14.508 12:23:10 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:24:14.508 12:23:10 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:24:14.508 12:23:10 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:24:14.508 12:23:10 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:24:14.508 12:23:10 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:24:14.508 12:23:10 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:24:14.508 12:23:10 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:24:14.508 12:23:10 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:24:14.508 12:23:10 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:24:14.508 12:23:10 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:24:14.508 12:23:10 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:24:14.508 12:23:10 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:24:14.508 12:23:10 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:24:14.508 12:23:10 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:24:14.508 12:23:10 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:24:14.508 12:23:10 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:24:14.508 12:23:10 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:24:14.508 12:23:10 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:24:14.508 12:23:10 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:24:14.508 12:23:10 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:24:14.508 12:23:10 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:24:14.508 12:23:10 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:24:14.508 12:23:10 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:24:14.508 12:23:10 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:24:14.508 12:23:10 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:24:14.508 12:23:10 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:14.509 12:23:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:14.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:14.509 12:23:10 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:24:14.509 12:23:10 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90172 00:24:14.509 12:23:10 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:24:14.509 12:23:10 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90172 00:24:14.509 12:23:10 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 90172 ']' 00:24:14.509 12:23:10 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:14.509 12:23:10 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:14.509 12:23:10 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:14.509 12:23:10 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:14.509 12:23:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:14.767 [2024-11-25 12:23:10.702432] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:24:14.767 [2024-11-25 12:23:10.702816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90172 ] 00:24:15.025 [2024-11-25 12:23:10.894794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:15.025 [2024-11-25 12:23:11.055936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:15.025 [2024-11-25 12:23:11.055944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:15.970 12:23:11 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:15.970 12:23:11 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:24:15.970 12:23:11 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:24:15.970 12:23:11 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:15.970 12:23:11 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:15.970 12:23:11 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:24:15.970 12:23:11 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:15.970 12:23:11 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:15.970 12:23:11 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:24:15.970 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:24:15.970 ' 00:24:17.876 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:24:17.876 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:24:17.876 12:23:13 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:24:17.876 12:23:13 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:17.876 12:23:13 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:24:17.876 12:23:13 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:24:17.876 12:23:13 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:17.876 12:23:13 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:17.876 12:23:13 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:24:17.876 ' 00:24:18.811 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:24:18.811 12:23:14 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:24:18.811 12:23:14 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:18.811 12:23:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:19.070 12:23:14 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:24:19.070 12:23:14 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:19.070 12:23:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:19.070 12:23:14 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:24:19.070 12:23:14 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:24:19.637 12:23:15 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:24:19.637 12:23:15 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:24:19.637 12:23:15 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:24:19.637 12:23:15 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:19.637 12:23:15 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:19.637 12:23:15 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:24:19.637 12:23:15 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:19.637 12:23:15 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:19.637 12:23:15 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:24:19.637 ' 00:24:20.572 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:24:20.830 12:23:16 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:24:20.830 12:23:16 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:20.830 12:23:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:20.830 12:23:16 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:24:20.830 12:23:16 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:20.830 12:23:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:20.830 12:23:16 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:24:20.830 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:24:20.830 ' 00:24:22.209 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:24:22.209 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:24:22.466 12:23:18 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:24:22.466 12:23:18 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:22.466 12:23:18 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:22.466 12:23:18 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90172 00:24:22.466 12:23:18 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90172 ']' 00:24:22.466 12:23:18 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90172 00:24:22.466 12:23:18 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:24:22.466 12:23:18 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:22.466 12:23:18 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90172 00:24:22.466 12:23:18 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:22.466 12:23:18 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:22.466 12:23:18 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90172' 00:24:22.466 killing process with pid 90172 00:24:22.466 12:23:18 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 90172 00:24:22.466 12:23:18 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 90172 00:24:25.082 12:23:20 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:24:25.082 12:23:20 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90172 ']' 00:24:25.082 12:23:20 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90172 00:24:25.082 12:23:20 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90172 ']' 00:24:25.082 12:23:20 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90172 00:24:25.082 Process with pid 90172 is not found 00:24:25.082 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90172) - No such process 00:24:25.082 12:23:20 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 90172 is not found' 00:24:25.082 12:23:20 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:24:25.082 12:23:20 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:24:25.082 12:23:20 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:24:25.082 12:23:20 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:24:25.082 ************************************ 00:24:25.082 END TEST spdkcli_raid 
00:24:25.082 ************************************ 00:24:25.082 00:24:25.082 real 0m10.248s 00:24:25.082 user 0m21.252s 00:24:25.082 sys 0m1.144s 00:24:25.082 12:23:20 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:25.082 12:23:20 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:25.082 12:23:20 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:24:25.082 12:23:20 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:25.082 12:23:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:25.082 12:23:20 -- common/autotest_common.sh@10 -- # set +x 00:24:25.082 ************************************ 00:24:25.082 START TEST blockdev_raid5f 00:24:25.082 ************************************ 00:24:25.082 12:23:20 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:24:25.082 * Looking for test storage... 00:24:25.082 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:24:25.082 12:23:20 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:25.082 12:23:20 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:24:25.082 12:23:20 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:25.082 12:23:20 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:25.082 12:23:20 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:25.082 12:23:20 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:25.082 12:23:20 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:25.082 12:23:20 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:24:25.082 12:23:20 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:24:25.082 12:23:20 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:24:25.082 12:23:20 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:24:25.082 12:23:20 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:24:25.082 12:23:20 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:24:25.082 12:23:20 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:24:25.082 12:23:20 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:25.082 12:23:20 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:24:25.082 12:23:20 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:24:25.082 12:23:20 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:25.082 12:23:20 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:25.082 12:23:20 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:24:25.082 12:23:20 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:24:25.082 12:23:20 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:25.082 12:23:20 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:24:25.082 12:23:20 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:24:25.082 12:23:20 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:24:25.082 12:23:20 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:24:25.082 12:23:20 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:25.083 12:23:20 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:24:25.083 12:23:20 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:24:25.083 12:23:20 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:25.083 12:23:20 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:25.083 12:23:20 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:24:25.083 12:23:20 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:25.083 12:23:20 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:25.083 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.083 --rc genhtml_branch_coverage=1 00:24:25.083 --rc genhtml_function_coverage=1 00:24:25.083 --rc genhtml_legend=1 00:24:25.083 --rc geninfo_all_blocks=1 00:24:25.083 --rc geninfo_unexecuted_blocks=1 00:24:25.083 00:24:25.083 ' 00:24:25.083 12:23:20 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:25.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.083 --rc genhtml_branch_coverage=1 00:24:25.083 --rc genhtml_function_coverage=1 00:24:25.083 --rc genhtml_legend=1 00:24:25.083 --rc geninfo_all_blocks=1 00:24:25.083 --rc geninfo_unexecuted_blocks=1 00:24:25.083 00:24:25.083 ' 00:24:25.083 12:23:20 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:25.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.083 --rc genhtml_branch_coverage=1 00:24:25.083 --rc genhtml_function_coverage=1 00:24:25.083 --rc genhtml_legend=1 00:24:25.083 --rc geninfo_all_blocks=1 00:24:25.083 --rc geninfo_unexecuted_blocks=1 00:24:25.083 00:24:25.083 ' 00:24:25.083 12:23:20 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:25.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.083 --rc genhtml_branch_coverage=1 00:24:25.083 --rc genhtml_function_coverage=1 00:24:25.083 --rc genhtml_legend=1 00:24:25.083 --rc geninfo_all_blocks=1 00:24:25.083 --rc geninfo_unexecuted_blocks=1 00:24:25.083 00:24:25.083 ' 00:24:25.083 12:23:20 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:24:25.083 12:23:20 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:24:25.083 12:23:20 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:24:25.083 12:23:20 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:24:25.083 12:23:20 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:24:25.083 12:23:20 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:24:25.083 12:23:20 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:24:25.083 12:23:20 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:24:25.083 12:23:20 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:24:25.083 12:23:20 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:24:25.083 12:23:20 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:24:25.083 12:23:20 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:24:25.083 12:23:20 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:24:25.083 12:23:20 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:24:25.083 12:23:20 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:24:25.083 12:23:20 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:24:25.083 12:23:20 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:24:25.083 12:23:20 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:24:25.083 12:23:20 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:24:25.083 12:23:20 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:24:25.083 12:23:20 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:24:25.083 12:23:20 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:24:25.083 12:23:20 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:24:25.083 12:23:20 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:24:25.083 12:23:20 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90452 00:24:25.083 12:23:20 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:24:25.083 12:23:20 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 
90452 00:24:25.083 12:23:20 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:24:25.083 12:23:20 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90452 ']' 00:24:25.083 12:23:20 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:25.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:25.083 12:23:20 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:25.083 12:23:20 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:25.083 12:23:20 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:25.083 12:23:20 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:25.083 [2024-11-25 12:23:20.961262] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:24:25.083 [2024-11-25 12:23:20.961485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90452 ] 00:24:25.083 [2024-11-25 12:23:21.146942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.343 [2024-11-25 12:23:21.276494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.280 12:23:22 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:26.280 12:23:22 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:24:26.280 12:23:22 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:24:26.280 12:23:22 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:24:26.280 12:23:22 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:24:26.280 12:23:22 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.280 12:23:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:26.280 Malloc0 00:24:26.280 Malloc1 00:24:26.280 Malloc2 00:24:26.280 12:23:22 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.280 12:23:22 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:24:26.280 12:23:22 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.280 12:23:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:26.280 12:23:22 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.280 12:23:22 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:24:26.280 12:23:22 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:24:26.280 12:23:22 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.280 12:23:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:26.280 12:23:22 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.280 12:23:22 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:24:26.280 12:23:22 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.280 12:23:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:26.540 12:23:22 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.540 12:23:22 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:24:26.540 12:23:22 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.540 12:23:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:26.540 12:23:22 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.540 12:23:22 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:24:26.540 12:23:22 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:24:26.540 12:23:22 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.540 12:23:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:26.540 12:23:22 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:24:26.540 12:23:22 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.540 12:23:22 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:24:26.540 12:23:22 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:24:26.540 12:23:22 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "e47ffd62-7802-4d7a-b552-8fe2a02a79b4"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "e47ffd62-7802-4d7a-b552-8fe2a02a79b4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "e47ffd62-7802-4d7a-b552-8fe2a02a79b4",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "b956ef27-2e92-442f-92f4-949c1f9668d0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"52626da0-5b9a-40db-aa7a-4655e98f675d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "01b72c47-7943-4ad9-970d-97f36fee9091",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:24:26.540 12:23:22 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:24:26.540 12:23:22 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:24:26.540 12:23:22 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:24:26.540 12:23:22 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 90452 00:24:26.540 12:23:22 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90452 ']' 00:24:26.540 12:23:22 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90452 00:24:26.540 12:23:22 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:24:26.540 12:23:22 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:26.540 12:23:22 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90452 00:24:26.540 killing process with pid 90452 00:24:26.540 12:23:22 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:26.540 12:23:22 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:26.540 12:23:22 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90452' 00:24:26.540 12:23:22 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90452 00:24:26.540 12:23:22 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90452 00:24:29.077 12:23:24 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:29.077 12:23:24 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:24:29.077 12:23:24 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:24:29.077 12:23:24 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:29.077 12:23:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:29.077 ************************************ 00:24:29.077 START TEST bdev_hello_world 00:24:29.077 ************************************ 00:24:29.077 12:23:24 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:24:29.077 [2024-11-25 12:23:25.105299] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:24:29.077 [2024-11-25 12:23:25.105504] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90514 ] 00:24:29.336 [2024-11-25 12:23:25.289006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.336 [2024-11-25 12:23:25.420307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:29.903 [2024-11-25 12:23:25.953904] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:24:29.903 [2024-11-25 12:23:25.954009] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:24:29.903 [2024-11-25 12:23:25.954035] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:24:29.903 [2024-11-25 12:23:25.954686] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:24:29.903 [2024-11-25 12:23:25.954857] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:24:29.903 [2024-11-25 12:23:25.954900] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:24:29.903 [2024-11-25 12:23:25.954975] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:24:29.903 00:24:29.903 [2024-11-25 12:23:25.955011] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:24:31.281 ************************************ 00:24:31.281 END TEST bdev_hello_world 00:24:31.281 ************************************ 00:24:31.281 00:24:31.281 real 0m2.296s 00:24:31.281 user 0m1.863s 00:24:31.281 sys 0m0.306s 00:24:31.281 12:23:27 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:31.281 12:23:27 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:24:31.281 12:23:27 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:24:31.281 12:23:27 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:31.281 12:23:27 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:31.281 12:23:27 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:31.281 ************************************ 00:24:31.281 START TEST bdev_bounds 00:24:31.281 ************************************ 00:24:31.281 12:23:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:24:31.281 12:23:27 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90556 00:24:31.281 Process bdevio pid: 90556 00:24:31.281 12:23:27 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:24:31.281 12:23:27 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:24:31.281 12:23:27 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90556' 00:24:31.281 12:23:27 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90556 00:24:31.281 12:23:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90556 ']' 00:24:31.281 12:23:27 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.281 12:23:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:31.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:31.281 12:23:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.281 12:23:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:31.281 12:23:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:24:31.541 [2024-11-25 12:23:27.444921] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:24:31.541 [2024-11-25 12:23:27.445104] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90556 ] 00:24:31.800 [2024-11-25 12:23:27.638087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:31.800 [2024-11-25 12:23:27.794722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:31.800 [2024-11-25 12:23:27.794792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:31.800 [2024-11-25 12:23:27.794870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:32.735 12:23:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:32.735 12:23:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:24:32.735 12:23:28 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:24:32.735 I/O targets: 00:24:32.735 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:24:32.735 00:24:32.735 
00:24:32.735 CUnit - A unit testing framework for C - Version 2.1-3 00:24:32.735 http://cunit.sourceforge.net/ 00:24:32.735 00:24:32.735 00:24:32.735 Suite: bdevio tests on: raid5f 00:24:32.735 Test: blockdev write read block ...passed 00:24:32.735 Test: blockdev write zeroes read block ...passed 00:24:32.735 Test: blockdev write zeroes read no split ...passed 00:24:32.735 Test: blockdev write zeroes read split ...passed 00:24:32.994 Test: blockdev write zeroes read split partial ...passed 00:24:32.994 Test: blockdev reset ...passed 00:24:32.994 Test: blockdev write read 8 blocks ...passed 00:24:32.994 Test: blockdev write read size > 128k ...passed 00:24:32.994 Test: blockdev write read invalid size ...passed 00:24:32.994 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:32.994 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:32.994 Test: blockdev write read max offset ...passed 00:24:32.994 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:32.994 Test: blockdev writev readv 8 blocks ...passed 00:24:32.994 Test: blockdev writev readv 30 x 1block ...passed 00:24:32.994 Test: blockdev writev readv block ...passed 00:24:32.994 Test: blockdev writev readv size > 128k ...passed 00:24:32.994 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:32.994 Test: blockdev comparev and writev ...passed 00:24:32.994 Test: blockdev nvme passthru rw ...passed 00:24:32.994 Test: blockdev nvme passthru vendor specific ...passed 00:24:32.994 Test: blockdev nvme admin passthru ...passed 00:24:32.994 Test: blockdev copy ...passed 00:24:32.994 00:24:32.994 Run Summary: Type Total Ran Passed Failed Inactive 00:24:32.994 suites 1 1 n/a 0 0 00:24:32.994 tests 23 23 23 0 0 00:24:32.994 asserts 130 130 130 0 n/a 00:24:32.994 00:24:32.994 Elapsed time = 0.547 seconds 00:24:32.994 0 00:24:32.994 12:23:28 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90556 00:24:32.994 
12:23:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90556 ']' 00:24:32.994 12:23:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90556 00:24:32.994 12:23:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:24:32.994 12:23:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:32.994 12:23:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90556 00:24:32.994 12:23:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:32.994 12:23:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:32.994 killing process with pid 90556 00:24:32.994 12:23:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90556' 00:24:32.994 12:23:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90556 00:24:32.994 12:23:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90556 00:24:34.380 12:23:30 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:24:34.380 00:24:34.380 real 0m2.899s 00:24:34.380 user 0m7.232s 00:24:34.380 sys 0m0.450s 00:24:34.380 12:23:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:34.381 12:23:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:24:34.381 ************************************ 00:24:34.381 END TEST bdev_bounds 00:24:34.381 ************************************ 00:24:34.381 12:23:30 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:24:34.381 12:23:30 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:34.381 12:23:30 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:34.381 
12:23:30 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:34.381 ************************************ 00:24:34.381 START TEST bdev_nbd 00:24:34.381 ************************************ 00:24:34.381 12:23:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:24:34.381 12:23:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:24:34.381 12:23:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:24:34.381 12:23:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:34.381 12:23:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:24:34.381 12:23:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:24:34.381 12:23:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:24:34.381 12:23:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:24:34.381 12:23:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:24:34.381 12:23:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:24:34.381 12:23:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:24:34.381 12:23:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:24:34.381 12:23:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:24:34.381 12:23:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:24:34.381 12:23:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:24:34.381 12:23:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:24:34.381 12:23:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90616 00:24:34.381 12:23:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:24:34.381 12:23:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90616 /var/tmp/spdk-nbd.sock 00:24:34.381 12:23:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:24:34.381 12:23:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90616 ']' 00:24:34.381 12:23:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:24:34.381 12:23:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:34.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:24:34.381 12:23:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:24:34.381 12:23:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:34.381 12:23:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:24:34.381 [2024-11-25 12:23:30.416687] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:24:34.381 [2024-11-25 12:23:30.416849] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:34.640 [2024-11-25 12:23:30.602862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.900 [2024-11-25 12:23:30.747012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.505 12:23:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:35.505 12:23:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:24:35.505 12:23:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:24:35.505 12:23:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:35.505 12:23:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:24:35.505 12:23:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:24:35.505 12:23:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:24:35.505 12:23:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:35.505 12:23:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:24:35.505 12:23:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:24:35.505 12:23:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:24:35.505 12:23:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:24:35.505 12:23:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:24:35.505 12:23:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:24:35.505 12:23:31 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:24:35.771 12:23:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:24:35.771 12:23:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:24:35.771 12:23:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:24:35.771 12:23:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:35.771 12:23:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:35.771 12:23:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:35.771 12:23:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:35.771 12:23:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:35.771 12:23:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:35.771 12:23:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:35.771 12:23:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:35.771 12:23:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:35.771 1+0 records in 00:24:35.771 1+0 records out 00:24:35.771 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319823 s, 12.8 MB/s 00:24:35.771 12:23:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:35.771 12:23:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:35.771 12:23:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:35.771 12:23:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:24:35.771 12:23:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:35.771 12:23:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:35.771 12:23:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:24:35.771 12:23:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:36.030 12:23:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:24:36.030 { 00:24:36.030 "nbd_device": "/dev/nbd0", 00:24:36.030 "bdev_name": "raid5f" 00:24:36.030 } 00:24:36.030 ]' 00:24:36.030 12:23:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:24:36.030 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:24:36.030 { 00:24:36.030 "nbd_device": "/dev/nbd0", 00:24:36.030 "bdev_name": "raid5f" 00:24:36.030 } 00:24:36.030 ]' 00:24:36.030 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:24:36.030 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:24:36.030 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:36.030 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:36.030 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:36.030 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:24:36.030 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:36.030 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:36.289 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:24:36.289 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:36.289 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:36.289 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:36.289 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:36.289 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:36.289 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:36.289 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:36.289 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:36.289 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:36.289 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:36.548 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:24:36.548 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:24:36.807 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:36.807 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:24:36.807 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:24:36.807 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:36.807 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:24:36.808 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:24:36.808 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:24:36.808 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:24:36.808 12:23:32 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:24:36.808 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:24:36.808 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:24:36.808 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:36.808 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:24:36.808 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:24:36.808 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:24:36.808 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:24:36.808 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:24:36.808 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:36.808 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:24:36.808 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:36.808 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:36.808 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:36.808 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:24:36.808 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:36.808 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:36.808 12:23:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:24:37.067 /dev/nbd0 00:24:37.067 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:37.067 12:23:33 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:37.067 12:23:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:37.067 12:23:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:37.067 12:23:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:37.067 12:23:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:37.067 12:23:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:37.067 12:23:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:37.067 12:23:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:37.067 12:23:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:37.067 12:23:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:37.067 1+0 records in 00:24:37.067 1+0 records out 00:24:37.067 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404437 s, 10.1 MB/s 00:24:37.067 12:23:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:37.067 12:23:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:37.067 12:23:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:37.067 12:23:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:37.067 12:23:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:37.067 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:37.067 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:37.067 12:23:33 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:37.067 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:37.067 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:37.325 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:24:37.325 { 00:24:37.325 "nbd_device": "/dev/nbd0", 00:24:37.325 "bdev_name": "raid5f" 00:24:37.325 } 00:24:37.325 ]' 00:24:37.325 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:24:37.325 { 00:24:37.325 "nbd_device": "/dev/nbd0", 00:24:37.325 "bdev_name": "raid5f" 00:24:37.325 } 00:24:37.325 ]' 00:24:37.326 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:37.585 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:24:37.585 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:24:37.585 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:37.585 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:24:37.585 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:24:37.585 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:24:37.585 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:24:37.585 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:24:37.585 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:24:37.585 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:24:37.585 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:24:37.585 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:24:37.585 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:24:37.585 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:24:37.585 256+0 records in 00:24:37.585 256+0 records out 00:24:37.585 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103852 s, 101 MB/s 00:24:37.585 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:37.585 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:24:37.585 256+0 records in 00:24:37.585 256+0 records out 00:24:37.585 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0423907 s, 24.7 MB/s 00:24:37.585 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:24:37.585 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:24:37.585 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:24:37.585 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:24:37.585 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:24:37.585 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:24:37.585 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:24:37.585 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:37.585 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:24:37.585 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:24:37.585 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:24:37.585 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:37.585 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:37.585 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:37.585 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:24:37.585 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:37.585 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:37.844 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:37.844 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:37.844 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:37.844 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:37.844 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:37.844 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:37.844 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:37.844 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:37.844 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:37.844 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:37.844 12:23:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:24:38.103 12:23:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:24:38.103 12:23:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:24:38.103 12:23:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:38.103 12:23:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:24:38.103 12:23:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:24:38.103 12:23:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:38.103 12:23:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:24:38.103 12:23:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:24:38.103 12:23:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:24:38.103 12:23:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:24:38.103 12:23:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:24:38.103 12:23:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:24:38.103 12:23:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:24:38.103 12:23:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:38.103 12:23:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:24:38.103 12:23:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:24:38.667 malloc_lvol_verify 00:24:38.667 12:23:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:24:38.926 e93ecbed-f162-4cfb-bd7c-9c9fe9781472 00:24:38.926 12:23:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:24:39.184 86180d89-735d-4619-81b2-eb6118e243ef 00:24:39.184 12:23:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:24:39.444 /dev/nbd0 00:24:39.444 12:23:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:24:39.444 12:23:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:24:39.444 12:23:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:24:39.444 12:23:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:24:39.444 12:23:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:24:39.444 mke2fs 1.47.0 (5-Feb-2023) 00:24:39.444 Discarding device blocks: 0/4096 done 00:24:39.444 Creating filesystem with 4096 1k blocks and 1024 inodes 00:24:39.444 00:24:39.444 Allocating group tables: 0/1 done 00:24:39.444 Writing inode tables: 0/1 done 00:24:39.444 Creating journal (1024 blocks): done 00:24:39.444 Writing superblocks and filesystem accounting information: 0/1 done 00:24:39.444 00:24:39.444 12:23:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:24:39.444 12:23:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:39.444 12:23:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:39.444 12:23:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:39.444 12:23:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:24:39.444 12:23:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:39.444 12:23:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:39.703 12:23:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:39.703 12:23:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:39.703 12:23:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:39.703 12:23:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:39.703 12:23:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:39.703 12:23:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:39.703 12:23:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:39.703 12:23:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:39.703 12:23:35 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90616 00:24:39.703 12:23:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90616 ']' 00:24:39.703 12:23:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90616 00:24:39.703 12:23:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:24:39.703 12:23:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:39.703 12:23:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90616 00:24:39.703 killing process with pid 90616 00:24:39.703 12:23:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:39.703 12:23:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:39.703 12:23:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90616' 00:24:39.703 12:23:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90616 00:24:39.703 12:23:35 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90616 00:24:41.081 12:23:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:24:41.081 ************************************ 00:24:41.081 END TEST bdev_nbd 00:24:41.081 ************************************ 00:24:41.081 00:24:41.081 real 0m6.812s 00:24:41.081 user 0m9.856s 00:24:41.081 sys 0m1.455s 00:24:41.081 12:23:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:41.081 12:23:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:24:41.081 12:23:37 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:24:41.081 12:23:37 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:24:41.081 12:23:37 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:24:41.081 12:23:37 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:24:41.081 12:23:37 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:41.081 12:23:37 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:41.081 12:23:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:41.081 ************************************ 00:24:41.081 START TEST bdev_fio 00:24:41.081 ************************************ 00:24:41.081 12:23:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:24:41.081 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:24:41.081 12:23:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:24:41.081 12:23:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:24:41.081 12:23:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:24:41.081 12:23:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:24:41.081 12:23:37 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:24:41.340 12:23:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:24:41.340 12:23:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:24:41.340 12:23:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:24:41.340 12:23:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:24:41.340 12:23:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:24:41.340 12:23:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:24:41.340 12:23:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:24:41.340 12:23:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:24:41.340 12:23:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:24:41.340 12:23:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:24:41.340 12:23:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:24:41.340 12:23:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:24:41.340 12:23:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:24:41.340 12:23:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:24:41.340 12:23:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:24:41.340 12:23:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:24:41.340 12:23:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:24:41.340 12:23:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:24:41.340 12:23:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:24:41.340 12:23:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:24:41.340 12:23:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:24:41.340 12:23:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:24:41.340 12:23:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:24:41.340 12:23:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:24:41.340 12:23:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:41.340 12:23:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:24:41.340 ************************************ 00:24:41.340 START TEST bdev_fio_rw_verify 00:24:41.340 ************************************ 00:24:41.340 12:23:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:24:41.340 12:23:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:24:41.340 12:23:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:41.340 12:23:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:41.340 12:23:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:41.340 12:23:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:41.340 12:23:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:24:41.340 12:23:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:41.340 12:23:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:41.340 12:23:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:41.340 12:23:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:24:41.340 12:23:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:41.340 12:23:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:24:41.340 12:23:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:24:41.340 12:23:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:24:41.340 12:23:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:41.340 12:23:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:24:41.599 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:24:41.599 fio-3.35 00:24:41.599 Starting 1 thread 00:24:53.800 00:24:53.800 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90832: Mon Nov 25 12:23:48 2024 00:24:53.800 read: IOPS=8455, BW=33.0MiB/s (34.6MB/s)(330MiB/10001msec) 00:24:53.800 slat (usec): min=22, max=1104, avg=29.04, stdev= 6.94 00:24:53.800 clat (usec): min=14, max=1386, avg=188.02, stdev=70.70 00:24:53.800 lat (usec): min=41, max=1416, avg=217.07, stdev=71.58 00:24:53.800 clat percentiles (usec): 00:24:53.800 | 50.000th=[ 190], 99.000th=[ 330], 99.900th=[ 383], 99.990th=[ 1254], 00:24:53.800 | 99.999th=[ 1385] 00:24:53.800 write: IOPS=8929, BW=34.9MiB/s (36.6MB/s)(344MiB/9873msec); 0 zone resets 00:24:53.800 slat (usec): min=11, max=233, avg=23.48, stdev= 5.35 00:24:53.800 clat (usec): min=93, max=1751, avg=430.46, stdev=60.96 00:24:53.800 lat (usec): min=114, max=1793, avg=453.94, stdev=62.77 00:24:53.800 clat percentiles (usec): 00:24:53.800 | 50.000th=[ 437], 99.000th=[ 594], 99.900th=[ 775], 99.990th=[ 1156], 00:24:53.800 | 99.999th=[ 1745] 00:24:53.800 bw ( KiB/s): min=33720, max=38312, per=99.58%, avg=35565.05, stdev=1896.74, samples=19 00:24:53.800 iops : min= 8430, max= 9578, avg=8891.37, stdev=474.31, samples=19 00:24:53.800 lat (usec) : 20=0.01%, 50=0.01%, 
100=5.77%, 250=31.78%, 500=58.43% 00:24:53.800 lat (usec) : 750=3.96%, 1000=0.03% 00:24:53.800 lat (msec) : 2=0.03% 00:24:53.800 cpu : usr=98.72%, sys=0.46%, ctx=28, majf=0, minf=7389 00:24:53.800 IO depths : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:53.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:53.800 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:53.800 issued rwts: total=84564,88157,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:53.800 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:53.800 00:24:53.801 Run status group 0 (all jobs): 00:24:53.801 READ: bw=33.0MiB/s (34.6MB/s), 33.0MiB/s-33.0MiB/s (34.6MB/s-34.6MB/s), io=330MiB (346MB), run=10001-10001msec 00:24:53.801 WRITE: bw=34.9MiB/s (36.6MB/s), 34.9MiB/s-34.9MiB/s (36.6MB/s-36.6MB/s), io=344MiB (361MB), run=9873-9873msec 00:24:54.367 ----------------------------------------------------- 00:24:54.367 Suppressions used: 00:24:54.367 count bytes template 00:24:54.367 1 7 /usr/src/fio/parse.c 00:24:54.367 960 92160 /usr/src/fio/iolog.c 00:24:54.367 1 8 libtcmalloc_minimal.so 00:24:54.367 1 904 libcrypto.so 00:24:54.367 ----------------------------------------------------- 00:24:54.367 00:24:54.367 00:24:54.367 real 0m12.967s 00:24:54.367 user 0m13.177s 00:24:54.367 sys 0m0.977s 00:24:54.367 ************************************ 00:24:54.367 END TEST bdev_fio_rw_verify 00:24:54.367 ************************************ 00:24:54.367 12:23:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:54.367 12:23:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:24:54.367 12:23:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:24:54.367 12:23:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:24:54.367 12:23:50 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:24:54.367 12:23:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:24:54.367 12:23:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:24:54.367 12:23:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:24:54.367 12:23:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:24:54.367 12:23:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:24:54.367 12:23:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:24:54.367 12:23:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:24:54.367 12:23:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:24:54.367 12:23:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:24:54.367 12:23:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:24:54.367 12:23:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:24:54.367 12:23:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:24:54.367 12:23:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:24:54.367 12:23:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:24:54.367 12:23:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "e47ffd62-7802-4d7a-b552-8fe2a02a79b4"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": 
"e47ffd62-7802-4d7a-b552-8fe2a02a79b4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "e47ffd62-7802-4d7a-b552-8fe2a02a79b4",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "b956ef27-2e92-442f-92f4-949c1f9668d0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "52626da0-5b9a-40db-aa7a-4655e98f675d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "01b72c47-7943-4ad9-970d-97f36fee9091",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:24:54.367 12:23:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:24:54.367 12:23:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:24:54.367 /home/vagrant/spdk_repo/spdk 00:24:54.367 12:23:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:24:54.367 12:23:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:24:54.367 12:23:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # 
return 0 00:24:54.367 00:24:54.367 real 0m13.196s 00:24:54.367 user 0m13.284s 00:24:54.367 sys 0m1.072s 00:24:54.367 12:23:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:54.367 ************************************ 00:24:54.367 END TEST bdev_fio 00:24:54.367 ************************************ 00:24:54.367 12:23:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:24:54.367 12:23:50 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:54.367 12:23:50 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:24:54.367 12:23:50 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:24:54.367 12:23:50 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:54.367 12:23:50 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:54.367 ************************************ 00:24:54.367 START TEST bdev_verify 00:24:54.367 ************************************ 00:24:54.367 12:23:50 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:24:54.626 [2024-11-25 12:23:50.519540] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:24:54.626 [2024-11-25 12:23:50.519719] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90996 ] 00:24:54.626 [2024-11-25 12:23:50.704845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:54.885 [2024-11-25 12:23:50.839992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:54.885 [2024-11-25 12:23:50.840001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:55.452 Running I/O for 5 seconds... 00:24:57.343 12644.00 IOPS, 49.39 MiB/s [2024-11-25T12:23:54.811Z] 13324.00 IOPS, 52.05 MiB/s [2024-11-25T12:23:55.746Z] 13408.00 IOPS, 52.38 MiB/s [2024-11-25T12:23:56.683Z] 13502.50 IOPS, 52.74 MiB/s [2024-11-25T12:23:56.683Z] 13602.80 IOPS, 53.14 MiB/s 00:25:00.592 Latency(us) 00:25:00.592 [2024-11-25T12:23:56.683Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:00.592 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:00.592 Verification LBA range: start 0x0 length 0x2000 00:25:00.592 raid5f : 5.02 6805.75 26.58 0.00 0.00 28384.64 258.79 21567.30 00:25:00.592 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:00.592 Verification LBA range: start 0x2000 length 0x2000 00:25:00.592 raid5f : 5.02 6797.35 26.55 0.00 0.00 28297.20 123.81 25141.99 00:25:00.592 [2024-11-25T12:23:56.683Z] =================================================================================================================== 00:25:00.592 [2024-11-25T12:23:56.683Z] Total : 13603.09 53.14 0.00 0.00 28340.92 123.81 25141.99 00:25:01.968 00:25:01.968 real 0m7.317s 00:25:01.968 user 0m13.398s 00:25:01.968 sys 0m0.324s 00:25:01.968 12:23:57 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:01.968 12:23:57 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:25:01.968 ************************************ 00:25:01.968 END TEST bdev_verify 00:25:01.968 ************************************ 00:25:01.968 12:23:57 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:25:01.968 12:23:57 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:25:01.968 12:23:57 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:01.968 12:23:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:01.968 ************************************ 00:25:01.968 START TEST bdev_verify_big_io 00:25:01.968 ************************************ 00:25:01.968 12:23:57 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:25:01.968 [2024-11-25 12:23:57.895236] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:25:01.968 [2024-11-25 12:23:57.895479] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91089 ] 00:25:02.226 [2024-11-25 12:23:58.081832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:02.226 [2024-11-25 12:23:58.211598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.226 [2024-11-25 12:23:58.211608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:02.794 Running I/O for 5 seconds... 
00:25:05.101 693.00 IOPS, 43.31 MiB/s [2024-11-25T12:24:02.155Z] 760.00 IOPS, 47.50 MiB/s [2024-11-25T12:24:03.089Z] 761.33 IOPS, 47.58 MiB/s [2024-11-25T12:24:04.024Z] 761.50 IOPS, 47.59 MiB/s [2024-11-25T12:24:04.024Z] 761.60 IOPS, 47.60 MiB/s 00:25:07.933 Latency(us) 00:25:07.933 [2024-11-25T12:24:04.024Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.933 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:07.933 Verification LBA range: start 0x0 length 0x200 00:25:07.933 raid5f : 5.20 390.62 24.41 0.00 0.00 8118767.23 197.35 333637.82 00:25:07.933 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:25:07.933 Verification LBA range: start 0x200 length 0x200 00:25:07.933 raid5f : 5.18 392.40 24.53 0.00 0.00 8067080.46 219.69 333637.82 00:25:07.933 [2024-11-25T12:24:04.024Z] =================================================================================================================== 00:25:07.933 [2024-11-25T12:24:04.024Z] Total : 783.02 48.94 0.00 0.00 8092923.85 197.35 333637.82 00:25:09.309 00:25:09.309 real 0m7.481s 00:25:09.309 user 0m13.749s 00:25:09.309 sys 0m0.312s 00:25:09.309 12:24:05 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:09.309 ************************************ 00:25:09.309 END TEST bdev_verify_big_io 00:25:09.309 ************************************ 00:25:09.309 12:24:05 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:25:09.309 12:24:05 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:09.309 12:24:05 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:25:09.309 12:24:05 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:09.309 12:24:05 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:09.309 ************************************ 00:25:09.309 START TEST bdev_write_zeroes 00:25:09.309 ************************************ 00:25:09.309 12:24:05 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:09.566 [2024-11-25 12:24:05.399447] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:25:09.566 [2024-11-25 12:24:05.399602] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91188 ] 00:25:09.567 [2024-11-25 12:24:05.576550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.825 [2024-11-25 12:24:05.703648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:10.391 Running I/O for 1 seconds... 
00:25:11.325 20271.00 IOPS, 79.18 MiB/s 00:25:11.325 Latency(us) 00:25:11.325 [2024-11-25T12:24:07.416Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.325 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:25:11.325 raid5f : 1.01 20251.15 79.11 0.00 0.00 6295.91 1906.50 8579.26 00:25:11.325 [2024-11-25T12:24:07.416Z] =================================================================================================================== 00:25:11.325 [2024-11-25T12:24:07.416Z] Total : 20251.15 79.11 0.00 0.00 6295.91 1906.50 8579.26 00:25:12.702 00:25:12.702 real 0m3.254s 00:25:12.702 user 0m2.830s 00:25:12.702 sys 0m0.293s 00:25:12.702 ************************************ 00:25:12.702 END TEST bdev_write_zeroes 00:25:12.702 12:24:08 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:12.702 12:24:08 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:25:12.702 ************************************ 00:25:12.702 12:24:08 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:12.702 12:24:08 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:25:12.702 12:24:08 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:12.702 12:24:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:12.703 ************************************ 00:25:12.703 START TEST bdev_json_nonenclosed 00:25:12.703 ************************************ 00:25:12.703 12:24:08 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:12.703 [2024-11-25 
12:24:08.735068] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:25:12.703 [2024-11-25 12:24:08.735269] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91240 ] 00:25:12.960 [2024-11-25 12:24:08.923842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.220 [2024-11-25 12:24:09.056287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.220 [2024-11-25 12:24:09.056429] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:25:13.220 [2024-11-25 12:24:09.056473] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:25:13.220 [2024-11-25 12:24:09.056489] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:13.480 00:25:13.480 real 0m0.717s 00:25:13.480 user 0m0.453s 00:25:13.480 sys 0m0.158s 00:25:13.480 12:24:09 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:13.480 ************************************ 00:25:13.480 END TEST bdev_json_nonenclosed 00:25:13.480 ************************************ 00:25:13.480 12:24:09 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:25:13.480 12:24:09 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:13.480 12:24:09 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:25:13.480 12:24:09 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:13.480 12:24:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:13.480 
************************************ 00:25:13.480 START TEST bdev_json_nonarray 00:25:13.480 ************************************ 00:25:13.480 12:24:09 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:13.480 [2024-11-25 12:24:09.464616] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:25:13.480 [2024-11-25 12:24:09.464771] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91267 ] 00:25:13.738 [2024-11-25 12:24:09.665310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.738 [2024-11-25 12:24:09.815628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.738 [2024-11-25 12:24:09.815750] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:25:13.738 [2024-11-25 12:24:09.815779] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:25:13.738 [2024-11-25 12:24:09.815804] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:13.996 00:25:13.996 real 0m0.700s 00:25:13.996 user 0m0.466s 00:25:13.996 sys 0m0.128s 00:25:13.996 12:24:10 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:13.996 12:24:10 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:25:13.996 ************************************ 00:25:13.996 END TEST bdev_json_nonarray 00:25:13.996 ************************************ 00:25:14.253 12:24:10 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:25:14.253 12:24:10 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:25:14.253 12:24:10 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:25:14.253 12:24:10 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:25:14.253 12:24:10 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:25:14.253 12:24:10 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:25:14.253 12:24:10 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:25:14.253 12:24:10 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:25:14.253 12:24:10 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:25:14.253 12:24:10 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:25:14.253 12:24:10 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:25:14.253 00:25:14.253 real 0m49.460s 00:25:14.253 user 1m7.576s 00:25:14.253 sys 0m5.466s 00:25:14.253 12:24:10 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:14.253 ************************************ 00:25:14.253 END TEST blockdev_raid5f 00:25:14.253 
************************************ 00:25:14.253 12:24:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:14.253 12:24:10 -- spdk/autotest.sh@194 -- # uname -s 00:25:14.253 12:24:10 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:25:14.253 12:24:10 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:25:14.253 12:24:10 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:25:14.253 12:24:10 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:25:14.253 12:24:10 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:25:14.253 12:24:10 -- spdk/autotest.sh@260 -- # timing_exit lib 00:25:14.253 12:24:10 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:14.253 12:24:10 -- common/autotest_common.sh@10 -- # set +x 00:25:14.253 12:24:10 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:25:14.253 12:24:10 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:25:14.253 12:24:10 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:25:14.253 12:24:10 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:25:14.253 12:24:10 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:25:14.253 12:24:10 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:25:14.253 12:24:10 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:25:14.253 12:24:10 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:25:14.253 12:24:10 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:25:14.253 12:24:10 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:25:14.253 12:24:10 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:25:14.253 12:24:10 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:25:14.253 12:24:10 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:25:14.253 12:24:10 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:25:14.253 12:24:10 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:25:14.253 12:24:10 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:25:14.253 12:24:10 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:25:14.253 12:24:10 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:25:14.253 12:24:10 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 
00:25:14.253 12:24:10 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:25:14.253 12:24:10 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:14.253 12:24:10 -- common/autotest_common.sh@10 -- # set +x 00:25:14.253 12:24:10 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:25:14.253 12:24:10 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:25:14.253 12:24:10 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:25:14.253 12:24:10 -- common/autotest_common.sh@10 -- # set +x 00:25:15.625 INFO: APP EXITING 00:25:15.625 INFO: killing all VMs 00:25:15.625 INFO: killing vhost app 00:25:15.625 INFO: EXIT DONE 00:25:15.883 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:16.142 Waiting for block devices as requested 00:25:16.142 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:16.142 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:17.077 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:17.077 Cleaning 00:25:17.077 Removing: /var/run/dpdk/spdk0/config 00:25:17.077 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:25:17.077 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:25:17.077 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:25:17.077 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:25:17.077 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:25:17.077 Removing: /var/run/dpdk/spdk0/hugepage_info 00:25:17.077 Removing: /dev/shm/spdk_tgt_trace.pid56855 00:25:17.077 Removing: /var/run/dpdk/spdk0 00:25:17.077 Removing: /var/run/dpdk/spdk_pid56625 00:25:17.077 Removing: /var/run/dpdk/spdk_pid56855 00:25:17.077 Removing: /var/run/dpdk/spdk_pid57089 00:25:17.077 Removing: /var/run/dpdk/spdk_pid57193 00:25:17.077 Removing: /var/run/dpdk/spdk_pid57244 00:25:17.077 Removing: /var/run/dpdk/spdk_pid57372 00:25:17.077 Removing: /var/run/dpdk/spdk_pid57401 00:25:17.077 
Removing: /var/run/dpdk/spdk_pid57600 00:25:17.077 Removing: /var/run/dpdk/spdk_pid57717 00:25:17.077 Removing: /var/run/dpdk/spdk_pid57824 00:25:17.077 Removing: /var/run/dpdk/spdk_pid57946 00:25:17.077 Removing: /var/run/dpdk/spdk_pid58054 00:25:17.077 Removing: /var/run/dpdk/spdk_pid58088 00:25:17.077 Removing: /var/run/dpdk/spdk_pid58130 00:25:17.077 Removing: /var/run/dpdk/spdk_pid58206 00:25:17.077 Removing: /var/run/dpdk/spdk_pid58312 00:25:17.077 Removing: /var/run/dpdk/spdk_pid58793 00:25:17.077 Removing: /var/run/dpdk/spdk_pid58869 00:25:17.077 Removing: /var/run/dpdk/spdk_pid58951 00:25:17.077 Removing: /var/run/dpdk/spdk_pid58967 00:25:17.077 Removing: /var/run/dpdk/spdk_pid59123 00:25:17.077 Removing: /var/run/dpdk/spdk_pid59145 00:25:17.077 Removing: /var/run/dpdk/spdk_pid59293 00:25:17.077 Removing: /var/run/dpdk/spdk_pid59315 00:25:17.077 Removing: /var/run/dpdk/spdk_pid59384 00:25:17.077 Removing: /var/run/dpdk/spdk_pid59408 00:25:17.077 Removing: /var/run/dpdk/spdk_pid59472 00:25:17.077 Removing: /var/run/dpdk/spdk_pid59490 00:25:17.077 Removing: /var/run/dpdk/spdk_pid59685 00:25:17.077 Removing: /var/run/dpdk/spdk_pid59727 00:25:17.077 Removing: /var/run/dpdk/spdk_pid59815 00:25:17.077 Removing: /var/run/dpdk/spdk_pid61180 00:25:17.077 Removing: /var/run/dpdk/spdk_pid61392 00:25:17.077 Removing: /var/run/dpdk/spdk_pid61545 00:25:17.077 Removing: /var/run/dpdk/spdk_pid62195 00:25:17.077 Removing: /var/run/dpdk/spdk_pid62412 00:25:17.077 Removing: /var/run/dpdk/spdk_pid62554 00:25:17.077 Removing: /var/run/dpdk/spdk_pid63214 00:25:17.077 Removing: /var/run/dpdk/spdk_pid63546 00:25:17.077 Removing: /var/run/dpdk/spdk_pid63692 00:25:17.077 Removing: /var/run/dpdk/spdk_pid65099 00:25:17.077 Removing: /var/run/dpdk/spdk_pid65357 00:25:17.077 Removing: /var/run/dpdk/spdk_pid65503 00:25:17.077 Removing: /var/run/dpdk/spdk_pid66916 00:25:17.077 Removing: /var/run/dpdk/spdk_pid67169 00:25:17.077 Removing: /var/run/dpdk/spdk_pid67320 00:25:17.077 Removing: 
/var/run/dpdk/spdk_pid68728 00:25:17.077 Removing: /var/run/dpdk/spdk_pid69180 00:25:17.077 Removing: /var/run/dpdk/spdk_pid69326 00:25:17.077 Removing: /var/run/dpdk/spdk_pid70837 00:25:17.077 Removing: /var/run/dpdk/spdk_pid71102 00:25:17.077 Removing: /var/run/dpdk/spdk_pid71253 00:25:17.077 Removing: /var/run/dpdk/spdk_pid72765 00:25:17.077 Removing: /var/run/dpdk/spdk_pid73026 00:25:17.077 Removing: /var/run/dpdk/spdk_pid73175 00:25:17.077 Removing: /var/run/dpdk/spdk_pid74678 00:25:17.077 Removing: /var/run/dpdk/spdk_pid75176 00:25:17.077 Removing: /var/run/dpdk/spdk_pid75322 00:25:17.077 Removing: /var/run/dpdk/spdk_pid75460 00:25:17.077 Removing: /var/run/dpdk/spdk_pid75918 00:25:17.077 Removing: /var/run/dpdk/spdk_pid76689 00:25:17.077 Removing: /var/run/dpdk/spdk_pid77095 00:25:17.077 Removing: /var/run/dpdk/spdk_pid77796 00:25:17.077 Removing: /var/run/dpdk/spdk_pid78277 00:25:17.077 Removing: /var/run/dpdk/spdk_pid79076 00:25:17.077 Removing: /var/run/dpdk/spdk_pid79492 00:25:17.077 Removing: /var/run/dpdk/spdk_pid81486 00:25:17.078 Removing: /var/run/dpdk/spdk_pid81939 00:25:17.078 Removing: /var/run/dpdk/spdk_pid82391 00:25:17.078 Removing: /var/run/dpdk/spdk_pid84517 00:25:17.078 Removing: /var/run/dpdk/spdk_pid85008 00:25:17.078 Removing: /var/run/dpdk/spdk_pid85517 00:25:17.078 Removing: /var/run/dpdk/spdk_pid86586 00:25:17.337 Removing: /var/run/dpdk/spdk_pid86920 00:25:17.337 Removing: /var/run/dpdk/spdk_pid87878 00:25:17.337 Removing: /var/run/dpdk/spdk_pid88206 00:25:17.337 Removing: /var/run/dpdk/spdk_pid89161 00:25:17.337 Removing: /var/run/dpdk/spdk_pid89495 00:25:17.337 Removing: /var/run/dpdk/spdk_pid90172 00:25:17.337 Removing: /var/run/dpdk/spdk_pid90452 00:25:17.337 Removing: /var/run/dpdk/spdk_pid90514 00:25:17.337 Removing: /var/run/dpdk/spdk_pid90556 00:25:17.337 Removing: /var/run/dpdk/spdk_pid90817 00:25:17.337 Removing: /var/run/dpdk/spdk_pid90996 00:25:17.338 Removing: /var/run/dpdk/spdk_pid91089 00:25:17.338 Removing: 
/var/run/dpdk/spdk_pid91188 00:25:17.338 Removing: /var/run/dpdk/spdk_pid91240 00:25:17.338 Removing: /var/run/dpdk/spdk_pid91267 00:25:17.338 Clean 00:25:17.338 12:24:13 -- common/autotest_common.sh@1453 -- # return 0 00:25:17.338 12:24:13 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:25:17.338 12:24:13 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:17.338 12:24:13 -- common/autotest_common.sh@10 -- # set +x 00:25:17.338 12:24:13 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:25:17.338 12:24:13 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:17.338 12:24:13 -- common/autotest_common.sh@10 -- # set +x 00:25:17.338 12:24:13 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:17.338 12:24:13 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:25:17.338 12:24:13 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:25:17.338 12:24:13 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:25:17.338 12:24:13 -- spdk/autotest.sh@398 -- # hostname 00:25:17.338 12:24:13 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:25:17.597 geninfo: WARNING: invalid characters removed from testname! 
00:25:44.144 12:24:39 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:48.331 12:24:43 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:50.864 12:24:46 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:54.199 12:24:49 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:56.734 12:24:52 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:59.296 12:24:55 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:02.680 12:24:58 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:26:02.680 12:24:58 -- spdk/autorun.sh@1 -- $ timing_finish 00:26:02.680 12:24:58 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:26:02.680 12:24:58 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:26:02.680 12:24:58 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:26:02.680 12:24:58 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:26:02.680 + [[ -n 5210 ]] 00:26:02.680 + sudo kill 5210 00:26:02.689 [Pipeline] } 00:26:02.705 [Pipeline] // timeout 00:26:02.713 [Pipeline] } 00:26:02.728 [Pipeline] // stage 00:26:02.736 [Pipeline] } 00:26:02.753 [Pipeline] // catchError 00:26:02.763 [Pipeline] stage 00:26:02.766 [Pipeline] { (Stop VM) 00:26:02.779 [Pipeline] sh 00:26:03.059 + vagrant halt 00:26:06.399 ==> default: Halting domain... 00:26:12.976 [Pipeline] sh 00:26:13.255 + vagrant destroy -f 00:26:16.562 ==> default: Removing domain... 
00:26:16.575 [Pipeline] sh 00:26:16.857 + mv output /var/jenkins/workspace/raid-vg-autotest_2/output 00:26:16.866 [Pipeline] } 00:26:16.884 [Pipeline] // stage 00:26:16.890 [Pipeline] } 00:26:16.955 [Pipeline] // dir 00:26:16.961 [Pipeline] } 00:26:16.975 [Pipeline] // wrap 00:26:16.980 [Pipeline] } 00:26:16.994 [Pipeline] // catchError 00:26:17.004 [Pipeline] stage 00:26:17.006 [Pipeline] { (Epilogue) 00:26:17.020 [Pipeline] sh 00:26:17.318 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:26:23.892 [Pipeline] catchError 00:26:23.894 [Pipeline] { 00:26:23.910 [Pipeline] sh 00:26:24.192 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:26:24.451 Artifacts sizes are good 00:26:24.460 [Pipeline] } 00:26:24.475 [Pipeline] // catchError 00:26:24.486 [Pipeline] archiveArtifacts 00:26:24.493 Archiving artifacts 00:26:24.599 [Pipeline] cleanWs 00:26:24.611 [WS-CLEANUP] Deleting project workspace... 00:26:24.611 [WS-CLEANUP] Deferred wipeout is used... 00:26:24.617 [WS-CLEANUP] done 00:26:24.619 [Pipeline] } 00:26:24.635 [Pipeline] // stage 00:26:24.640 [Pipeline] } 00:26:24.655 [Pipeline] // node 00:26:24.660 [Pipeline] End of Pipeline 00:26:24.696 Finished: SUCCESS